Measuring and modeling response behavior and response quality in web surveys 2

Chair | Ms Carina Cornesse (Mannheim University and GESIS)
Coordinator 1 | Dr Jean Philippe Décieux (Université du Luxembourg)
Coordinator 2 | Ms Jessica Herzing (Mannheim University)
Coordinator 3 | Professor Jochen Mayerl (University of Kaiserslautern)
Coordinator 4 | Ms Alexandra Mergener (Federal Institute for Vocational Education and Training)
Coordinator 5 | Mr Philipp Sischka (Université du Luxembourg)
Providing high-quality answers requires respondents to devote their attention to completing the questionnaire and thus to assess each question thoroughly. This is particularly challenging in web surveys, which lack interviewers who can gauge how carefully respondents answer the questions and motivate them to be more attentive if necessary. Inattentiveness can provoke response behavior that is commonly associated with measurement and nonresponse error: respondents may comprehend the question only superficially, retrieve partly relevant or irrelevant information, fail to properly form a judgement, or fail to map their judgement onto the available response options. Consequently, attention checks such as Instructed Response Items (IRIs) have been proposed to identify inattentive respondents. An IRI is included as one item in a grid and instructs respondents to mark a specific response category (e.g., “click strongly agree”). The instruction is not incorporated into the question text but is placed like an item label. The present study focuses on IRI attention checks because they (i) are easy to create and implement in a survey, (ii) require little space in a questionnaire (i.e., one item in a grid), (iii) provide a distinct measure of failing or passing the attention check, (iv) are not cognitively demanding, and (v), most importantly, provide a measure of how thoroughly respondents read the items of a grid.
Most of the literature on attention checks has focused on the consistency of certain “key” constructs, so that IRIs typically serve as a local measure of inattentiveness for the grid in which they are incorporated (e.g., Berinsky, Margolis and Sances 2014; Oppenheimer et al. 2009). This body of research focuses heavily on how the consistency of these key constructs can be improved by relying on the attention-check measures, for instance, by deleting “inattentive” respondents. In the present study, we extend the research on attention checks by addressing the question of which respondents fail an IRI and thus show questionable response behavior.
To answer this research question, we draw on a web-based panel survey with seven waves that was conducted between June and October 2013 in Germany. In each wave of the panel, an IRI attention check was implemented in a grid question with a five-point scale. Across waves, the proportion of respondents failing the IRIs varied between 6.1% and 15.7%. Based on these data, we used logistic hybrid panel regression to investigate the effects of time-invariant (e.g., sex, age, education) and time-varying (e.g., interest in the survey topic, respondent motivation) factors on the likelihood of failing an IRI. The results of our study thus provide additional insights into who shows questionable response behavior in web surveys. Moreover, our methodological approach allows for a finer-grained discussion of whether this response behavior results from rather static respondent characteristics or is subject to change.
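A hybrid (within-between) panel model separates stable between-person differences from wave-to-wave change by decomposing each time-varying predictor into a person mean and a wave-specific deviation. The following is only a minimal sketch of that decomposition, assuming a long-format data set with entirely hypothetical file and variable names; it uses a pooled logit with cluster-robust standard errors as a simple stand-in for the random-effects hybrid estimator described in the abstract.

    # Minimal sketch of a "hybrid" (within-between) panel logit.
    # All file names and variable names below are hypothetical.
    import pandas as pd
    import statsmodels.formula.api as smf

    # Long format: one row per respondent-wave, binary outcome `failed_iri`.
    df = pd.read_csv("panel_waves.csv")

    # Decompose each time-varying predictor into a person mean (between part)
    # and a wave-specific deviation from that mean (within part).
    for var in ["topic_interest", "motivation"]:
        df[var + "_mean"] = df.groupby("person_id")[var].transform("mean")
        df[var + "_dev"] = df[var] - df[var + "_mean"]

    # Pooled logit with cluster-robust standard errors as a stand-in for the
    # random-effects estimator; time-invariant covariates enter directly.
    model = smf.logit(
        "failed_iri ~ female + age + education"
        " + topic_interest_mean + topic_interest_dev"
        " + motivation_mean + motivation_dev + C(wave)",
        data=df,
    )
    result = model.fit(cov_type="cluster", cov_kwds={"groups": df["person_id"]})
    print(result.summary())

In this setup, the coefficients on the *_dev terms capture within-person change across waves, while the *_mean terms capture stable between-person differences; separating the two is what supports the discussion of whether failing an IRI reflects static respondent characteristics or change over time.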
Selecting respondents for a web survey can be done in many different ways, although choosing a probability approach considerably limits the number of available methods. Most web surveys, however, are based on nonprobability samples, and the most widespread method of selecting respondents is to draw on already existing online panels. We explore the effect of different recruitment strategies for online panels on the quality of survey data. We fielded the same questionnaire (which included about 25 non-demographic, factual questions for which aggregated administrative data were available) with five different German online panel providers. This set of online panels consists of one commercial probability sample (n = 5,000), one academic nonprobability sample (n = 2,500), and three commercial nonprobability samples (each n = 5,000), each using a different method of recruitment. We report on differences in item nonresponse, response styles, and other indicators of data quality. Finally, we compare the surveys with aggregated administrative data (beyond demographics) for the same population.
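As one illustration of the kind of indicator compared across providers, a respondent-level item-nonresponse rate can be computed and then averaged per panel. The sketch below assumes a combined data set with a hypothetical `panel` column and item columns named q1–q25; none of these names come from the study itself.

    # Hypothetical sketch: average item-nonresponse rate per panel provider.
    import pandas as pd

    df = pd.read_csv("combined_panels.csv")                    # hypothetical combined file
    item_cols = [c for c in df.columns if c.startswith("q")]   # the ~25 factual items

    # Share of unanswered items per respondent, averaged within each panel.
    df["item_nr_rate"] = df[item_cols].isna().mean(axis=1)
    print(df.groupby("panel")["item_nr_rate"].mean().sort_values())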
RELEVANCE & RESEARCH QUESTION
Screens are everywhere. And so, of course, are interviews. Market research now happens in real life.
The author emphasizes the importance of the interview and its environment for several reasons: It’s the core of good research practice. Its costs heavily affect the economic health of research businesses. The fact that we don’t see the actual interview environment may make researchers unaware of potential impacts on answering behaviour. Panel interviews compete with the multiple distractions that come with ubiquitous devices. We can assume that the interview environment is constantly changing. Last but not least, we need to include in the equation the emerging shift from the interview to observation.
METHODS & DATA
A survey with a total of N = 1,049 respondents provides a comprehensive and representative picture of present-day interview environments. The respondents were free to choose time, place, and device. Consistency of and commitment to the online interview were measured using a fit statistic from a MaxDiff exercise.
RESULTS
A large share of panel interviews is done at home. Only 2% of the interviews can be classified as truly mobile (out-of-home, using a mobile data connection). 88% of the respondents show 100% consistency in their answering behaviour.
The quality of the answering behaviour is largely influenced by non-situational parameters such as the general personality trait of honesty and truthfulness, as measured with the HEXACO-60 personality inventory. It’s not affected, or only to a negligible extent, by parameters of the actual interview situation. There are, however, a few remarkable exceptions, such as the consumption of alcohol prior to the interview.
ADDED VALUE
When designing research, it’s key to keep in mind the environments in which panel interviews take place. For some research designs that expand the scope from lab situations to the real world, the very low share of truly mobile interviews is bad news, whereas the results indicate that interview environments are more homogeneous than expected.