Detecting, Explaining and Managing Interviewer Effects in Surveys 4
Session Organisers | Dr Daniela Ackermann-Piek (GESIS – Leibniz Institute for the Social Sciences, Mannheim, Germany), Mr Brad Edwards (Westat), Dr Jette Schröder (GESIS – Leibniz Institute for the Social Sciences, Mannheim, Germany)
Time | Thursday 18th July, 16:00 - 17:30 |
Room | D16 |
How much influence do interviewers have on different aspects of the survey process, and how can we better reduce their negative impact on data quality while enhancing their positive impact?
Although interviewer effects have been studied for several generations, they remain of high interest in interviewer-administered surveys. Interviewers are involved in nearly all aspects of the data collection process, including the production of sampling frames, gaining contact and cooperation with sample units, administration of the survey instrument, and the editing and transmission of data. Thus, interviewers can both cause and prevent errors in nearly all aspects of a survey.
However, detecting interviewer effects is only a first step; it is equally important to understand why they occur. Although various studies have sought to explain interviewer effects using multiple sources of data (e.g., paradata, interviewer characteristics, response times), the results are inconclusive. In addition, it is essential to prevent negative interviewer effects before they occur, to ensure that interviewer-administered surveys can produce high-quality data. There are multiple ways to intervene: interviewer training, monitoring during fieldwork, adaptive fieldwork designs, switching the survey mode, and so on. However, relatively little is known about how effectively these different methods reduce interviewer error, because experimental studies are lacking.
We invite researchers to submit papers dealing with aspects of detecting, explaining and preventing interviewer effects in surveys. We are especially interested in quasi-experimental studies on the detection, explanation, and prevention of interviewer error in surveys, and on developing or encouraging interviewers' ability to repair or avert errors. We welcome researchers and practitioners from all disciplines across the academic, governmental, private and voluntary sectors to contribute to our session.
Keywords: Interviewer effects, Interviewer training, Interviewer characteristics, Paradata, Total Survey Error
Mr Luke Taylor (Kantar Public)
Ms Sonila Dardha (Kantar Public) - Presenting Author
Understanding Society is a continuous longitudinal study conducted by Kantar Public (in partnership with Natcen) on behalf of the Institute for Social and Economic Research. Over the last year, we have created a quality control system to check interviewers’ administration of the survey.
The first stage aims to identify specific interviewers who elicit responses from their participants that are notably different from the average responses collected overall. Multilevel modelling is used to nest respondents (level 1) within interviewers (level 2) and examine these respondent sets individually. To control for demographic differences in respondent sets and for area effects, we include a range of socio-demographics and geo-demographics as predictors.
We model several key outcomes reflecting specific data quality indicators, such as item non-response, responses to questions that trigger key routing, satisficing, interview duration and the length of answers to open-ended questions. We use random intercept models to capture interviewer variance, i.e., in addition to the overall intercept, we model an intercept for each interviewer cluster separately. For each outcome, interviewers are flagged if the 95% confidence bounds of their random intercept do not cross the fixed intercept.
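The abstract names the method but includes no code; the following is a minimal, hypothetical sketch of this flagging logic, with simulated data and invented variable names, using Python's statsmodels (not necessarily the software used on Understanding Society). Because each interviewer's random intercept is estimated as a deviation from the fixed intercept, an interval that excludes zero is equivalent to one that does not cross the fixed intercept.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-in data: respondents (level 1) nested within interviewers (level 2).
rng = np.random.default_rng(42)
n_interviewers, n_respondents = 50, 20
u = rng.normal(0, 0.5, n_interviewers)                  # interviewer-level deviations
rows = []
for i in range(n_interviewers):
    for _ in range(n_respondents):
        age = rng.normal(50, 15)                        # a socio-demographic control
        y = 2.0 + 0.01 * age + u[i] + rng.normal(0, 1)  # e.g. a duration-based indicator
        rows.append({"interviewer": i, "age": age, "y": y})
df = pd.DataFrame(rows)

# Random-intercept model: an overall (fixed) intercept plus one intercept per interviewer.
result = smf.mixedlm("y ~ age", df, groups=df["interviewer"]).fit()

# Flag interviewers whose random intercept's 95% interval excludes zero, i.e. whose
# predicted intercept does not cross the fixed intercept.
flagged = [
    g for g, b in result.random_effects.items()
    if abs(b.iloc[0]) > 1.96 * np.sqrt(np.asarray(result.random_effects_cov[g])[0, 0])
]
print("flagged interviewers:", flagged)
```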
In a second stage, once we have identified interviewers with atypical response patterns, we retrain them. These sessions focus on the standardised protocols interviewers should follow and on the importance of reducing interviewer variance in the field.
Finally, it is important that the analysis is repeated periodically, both to determine the impact of our intervention and to track the progress of the interviewers since they were retrained.
Mrs Birgit Jesske (infas Institute for Applied Social Sciences)
Ms Jennifer Weitz (infas Institute for Applied Social Sciences) - Presenting Author
Survey organizations have to ensure that errors and effects are minimized by validating their data collection process throughout the entire survey period. Interviewer errors in particular must be identified and remedied as early as possible. Validation takes place during or after completion of the fieldwork, among other things via monitoring or data record checks.
Monitoring, an approved validation method in the CATI field, can also be used efficiently in face-to-face fieldwork by listening to recordings. To this end, interviewers record their interviews as audio files and transmit them to the survey agency. In addition to monitoring, deviant interviewer behaviour can also be detected by means of statistical methods. Deviant behaviour in this sense assumes that the interviewer's behaviour influences the respondent's answers and thus the data. "Together, these behavioural interactions between respondents and interviewers induce a dependence in responses within interviewers which is typically expressed as an intraclass correlation coefficient (ICC)" (Brunton-Smith et al. 2016).
Monitoring demands time as well as human resources, but offers the advantage of counteracting deviant behaviour during ongoing fieldwork by providing individual interviewer feedback. The ICC, by contrast, measures the effect of interviewer behaviour on the distribution of responses (its moments, namely the mean and variance). Thus, a sufficiently large number of interviews per interviewer is necessary for the ICC to be informative.
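For reference, the ICC relates the between-interviewer variance to the total variance: ICC = σ²_between / (σ²_between + σ²_within). Below is a minimal, hypothetical sketch of estimating it from an intercept-only multilevel model with Python's statsmodels, on simulated data (not the PASS data or the authors' code):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-in data: one survey item, respondents clustered by interviewer.
rng = np.random.default_rng(7)
n_interviewers, n_respondents = 40, 25
df = pd.DataFrame({
    "interviewer": np.repeat(np.arange(n_interviewers), n_respondents),
    "y": (np.repeat(rng.normal(0, 0.4, n_interviewers), n_respondents)
          + rng.normal(0, 1.0, n_interviewers * n_respondents)),
})

# An intercept-only random-intercept model yields the two variance components.
m = smf.mixedlm("y ~ 1", df, groups=df["interviewer"]).fit()
var_between = m.cov_re.iloc[0, 0]   # between-interviewer variance
var_within = m.scale                # residual (within-interviewer) variance
icc = var_between / (var_between + var_within)
print(f"ICC = {icc:.3f}")           # true value here is 0.4**2 / (0.4**2 + 1), about 0.14
```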
This paper examines the extent to which audio ratings and the ICC contribute to CAPI fieldwork monitoring. The results of both approaches are compared and analysed to determine the extent to which they complement or substitute for each other. Findings are based on data from the panel study "Labour Market and Social Security" (PASS). With the introduction of audio recordings in the CAPI field, almost 5,000 audio files have been recorded per wave, of which at least 10% will be rated systematically.
Mr Mikhail Bogdanov (National Research University 'Higher School of Economics') - Presenting Author
Mr Daniil Lebedev (National Research University 'Higher School of Economics')
When a survey changes from one mode to another, the role of the interviewer changes substantially; however, current Russian methodological studies have not given this problem proper consideration. Furthermore, interviewers' expectations about the transition to a new mode of data collection may affect data quality and therefore have a substantial impact on the success of the transition as a whole. Interviewers' roles, and the biases associated with personal interviewing, become even more significant factors influencing data quality in large-scale panel surveys. Thus, it is essential to understand which factors shape interviewers' expectations regarding the success of the transition to a new survey mode.
In this study, we analyze interviewers' perceptions of the change in the face-to-face interview process (from PAPI to CAPI) through 20 in-depth interviews with interviewers of the Russian Longitudinal Monitoring Survey (RLMS-HSE), conducted after the experimental transition to CAPI. RLMS-HSE is the largest and longest-running (since 1994) panel study in Russia. The main insight from the interviews is that the communicative aspect of the interview changed substantially with the transition from PAPI to CAPI. According to the interviews, tablets have a crucial effect on communication between interviewers and respondents, even though most interviewers and respondents knew each other from previous waves.
Moreover, structural equation modelling based on a survey of the interviewers showed that confidence in using technical devices (PCs, smartphones and tablets) fully mediates the relationship between age and expectations about the transition to CAPI.
Using a quasi-experimental design (pre- and post-training surveys), we examined the effect of training on different aspects of interviewers' expectations. We found that training principally affects expectations regarding the complexity of using the tablet to conduct the survey.
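The authors used structural equation modelling; as a simpler stand-in, the sketch below runs a parametric mediation analysis with statsmodels' Mediation class. All variables and effect sizes are invented to mimic the reported full-mediation pattern, not taken from the study.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.mediation import Mediation

# Simulated stand-in data: older interviewers report lower confidence with devices,
# and confidence (not age directly) drives expectations about CAPI.
rng = np.random.default_rng(11)
n = 300
age = rng.normal(45, 10, n)
confidence = -0.05 * age + rng.normal(0, 1, n)
expectation = 0.8 * confidence + rng.normal(0, 1, n)   # no direct age effect: full mediation
df = pd.DataFrame({"age": age, "confidence": confidence, "expectation": expectation})

# Outcome and mediator models, passed unfitted to Mediation.
outcome_model = sm.OLS.from_formula("expectation ~ age + confidence", data=df)
mediator_model = sm.OLS.from_formula("confidence ~ age", data=df)

med = Mediation(outcome_model, mediator_model, exposure="age", mediator="confidence")
res = med.fit(n_rep=500)
print(res.summary())   # ACME close to the total effect, ADE near zero => full mediation
```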
Ms Andrea Bauer (infas, Institute for Applied Social Sciences, Bonn, Germany) - Presenting Author
Mr Michael Ruland (infas, Institute for Applied Social Sciences, Bonn, Germany)
Panel studies in general, and face-to-face studies in particular, face the difficulty of motivating target persons to participate in long-term surveys. Especially for surveys that impose a supposedly higher respondent burden, panel attrition poses a significant risk.
The National Educational Panel Study (NEPS) collects longitudinal data on educational processes and competence development in Germany. Due to its particular survey design and the special requirements of its data collection process, the Starting Cohort Newborns (SC1) holds a special position compared to the other NEPS starting cohorts: significantly lower panel participation and cooperation rates may be expected here.
Target persons are children who, at the time of the first wave in 2012, were between six and eight months old. In addition to competency tests conducted with the infants, their parents have since been interviewed annually in their household surroundings. Furthermore, in the first three waves, every competency test was documented on videotape and the personal interviews were audiotaped. These requirements demanded a high level of trust on the participants' side: having their infant, themselves and even details of their household filmed by an interviewer they did not know. In addition, the survey meant a significant burden in time and personal effort for the participating families. Hence, special importance is attached to interviewer-participant interaction.
This paper analyses the effects of interviewer assignment in face-to-face panel surveys, based on data from the Starting Cohort Newborns (SC1). The basic assumption is that, given the particular survey design of this cohort, assigning the same interviewer to a panel participant supports the building of trust and hence has a positive effect on the family's participation in further panel waves. Therefore, the participation effects of same-interviewer assignment in follow-up waves are analysed with multivariate analyses, controlling for characteristics of both participants and interviewers.
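As a hypothetical illustration of such a multivariate analysis (not the NEPS/SC1 data or the authors' actual model), the sketch below fits a logistic regression of wave participation on a same-interviewer indicator, with invented participant and interviewer controls:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-in data: one row per family and wave transition.
rng = np.random.default_rng(5)
n = 1000
same_interviewer = rng.integers(0, 2, n)            # same interviewer as previous wave?
parent_age = rng.normal(32, 5, n)                   # participant characteristic
interviewer_experience = rng.normal(8, 4, n)        # interviewer characteristic
logit_p = (-0.2 + 0.5 * same_interviewer
           + 0.01 * parent_age + 0.03 * interviewer_experience)
participated = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))
df = pd.DataFrame({
    "participated": participated,
    "same_interviewer": same_interviewer,
    "parent_age": parent_age,
    "interviewer_experience": interviewer_experience,
})

# Logistic regression of wave participation on continuity of interviewer assignment,
# controlling for participant and interviewer characteristics.
m = smf.logit(
    "participated ~ same_interviewer + parent_age + interviewer_experience", df
).fit()
print(m.summary())
```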