Interviewers’ Deviations in Surveys 2

Convenor: Dr Natalja Menold (GESIS)
Coordinator 1: Professor Peter Winker (University of Giessen)
Interviewers are, intentionally or unintentionally, a potential source of survey error in the data collection process. In this paper we address whether interviewers introduce measurement error on substantive variables, using data from PIAAC Germany. In our analyses we compare the variance introduced by interviewers for two types of measures: direct assessments of respondents’ competencies and background variables. The analyses are supplemented with interviewer characteristics and attitudes collected in an interviewer survey.
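Interviewer effects of this kind are commonly quantified by fitting a multilevel model with random interviewer intercepts and computing the intraclass correlation (ICC), the share of total variance attributable to interviewers. The sketch below is a minimal illustration of that general approach, not the authors’ actual analysis; the file and column names (responses.csv, score, interviewer_id) are hypothetical.

```python
# Minimal sketch: estimating interviewer-level variance with a
# random-intercept model. All names are hypothetical, not from PIAAC.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("responses.csv")  # hypothetical: one row per respondent

# Random intercept per interviewer; fixed effects omitted for brevity.
model = smf.mixedlm("score ~ 1", data=df, groups=df["interviewer_id"])
result = model.fit()

# Interviewer variance component and residual variance.
var_interviewer = result.cov_re.iloc[0, 0]
var_residual = result.scale

# ICC: share of total variance attributable to interviewers.
icc = var_interviewer / (var_interviewer + var_residual)
print(f"Interviewer ICC: {icc:.3f}")
```

Estimating this ICC separately for competency scores and for background variables would mirror the comparison of the two types of measures described above.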
This paper analyzes interviewers’ deviations in question reading. In our first (behavior coding) study we found that interviewers spontaneously added numbers to response options. Due to this change in question reading, respondents appeared to be better able to formulate an answer. In our second (experimental) study we specifically tested the effect of reading numbers. Remarkably, reading numbers had no effect on response behavior. We present several explanations for the divergent findings of the two studies and emphasize the importance of making surveys cohesive and coherent, so that deviating from standardized scripts is not necessary.
Despite attempts to fully standardize survey interviewing, interactions between respondents and interviewers are characterized by deviations from the standardized protocol. Using structural equation modeling, this paper empirically examines the underlying latent factor structure of interviewer and respondent behaviors across different types of questions. We then assess how these factors affect data quality, as measured by response latency and the number of entry edits, and how this relationship differs by question type. Finally, we investigate how these relationships change over the course of the data collection period and how they are affected by sample composition and by interviewers encountering less cooperative respondents.
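As a rough illustration of this kind of latent-variable analysis, the sketch below specifies a two-factor model in lavaan-style syntax using the Python package semopy. All factor and indicator names are invented for illustration and do not correspond to the paper’s actual model or data.

```python
# Minimal sketch of a latent factor model for interviewer and respondent
# behaviors, using semopy. Variable names are hypothetical.
import pandas as pd
from semopy import Model

desc = """
# Measurement model: two latent behavior factors (hypothetical indicators).
interviewer_behavior =~ question_misread + probe + clarification
respondent_behavior  =~ request_repeat + qualified_answer + interruption

# Structural part: behaviors predicting a data-quality indicator.
response_latency ~ interviewer_behavior + respondent_behavior
"""

df = pd.read_csv("behavior_codes.csv")  # hypothetical coded-interaction data
model = Model(desc)
model.fit(df)
print(model.inspect())  # parameter estimates
```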
This paper analyses how process-generated paradata can be used to investigate interviewers’ deviations from standardized interviewing. Data come from the Survey of Health, Ageing and Retirement in Europe (SHARE), a cross-national face-to-face survey. Using durations derived from keystroke data, we investigate how long interviewers take to read out items that require no respondent interaction. Based on the criteria of adapting behaviour to the respondent and of learning effects over the course of fieldwork, three general interviewing patterns can be identified: standardized interviewing, tailoring, and speeding. We further test whether these interviewing patterns matter for the quality of the obtained data.
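A simplified sketch of how such reading-out durations might be derived from keystroke timestamps and used to flag speeding is given below; the field names, file name, and single-threshold classification are assumptions for illustration, not SHARE’s actual paradata schema or the authors’ classification procedure.

```python
# Minimal sketch: deriving item reading-out durations from keystroke
# timestamps and flagging potential speeders. All names are hypothetical.
import pandas as pd

keys = pd.read_csv("keystrokes.csv", parse_dates=["timestamp"])
# One row per keystroke: interviewer_id, item_id, timestamp

# Duration per item = time from first to last keystroke on that screen.
durations = (
    keys.groupby(["interviewer_id", "item_id"])["timestamp"]
        .agg(lambda t: (t.max() - t.min()).total_seconds())
        .rename("duration_s")
        .reset_index()
)

# Mean duration per interviewer, compared with the overall median:
# interviewers far below it are candidate "speeders".
per_interviewer = durations.groupby("interviewer_id")["duration_s"].mean()
threshold = 0.5 * durations["duration_s"].median()  # arbitrary cut-off
speeders = per_interviewer[per_interviewer < threshold]
print(speeders)
```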