It’s the Interviewers! New developments in interviewer effects research
Chair | Dr Salima Douhou (City University of London, CCSS)
Coordinator 1 | Professor Gabriele Durrant (University of Southampton)
Coordinator 2 | Dr Olga Maslovskaya (University of Southampton)
Coordinator 3 | Dr Kathrin Thomas (City University of London, CCSS)
Coordinator 4 | Mr Joel Williams (TNS BMRB)
Interviewers perform many different tasks when administering a survey and are thus crucial to the data collection process. However, they are – intentionally or unintentionally – a potential source of survey error. A large body of literature has by now accumulated measuring interviewer effects on unit nonresponse. Far fewer studies, however, explain the interviewer effects found, even though explaining these effects, developing methods to reduce them, and finding ways to adjust for them in analyses would benefit survey practitioners and analysts alike.
Recently, West and Blom (2016) published a research synthesis on factors explaining interviewer effects on various sources of survey error, including unit nonresponse. They find that the literature reports great variability across studies in the significance, and even the direction, of predictors of interviewer effects on unit nonresponse. This variability in findings may be due to a lack of consistency in key characteristics of the surveys examined, such as the group of interviewers employed, the survey organizations managing the interviewers, the sampling frame used, and the populations and time periods observed. In addition, the explanatory variables available to researchers examining interviewer effects on nonresponse differ greatly across studies and may thus influence the results.
This diversity in findings, survey characteristics, and explanatory variables available for analysis calls for a more orchestrated effort to explain interviewer effects on unit nonresponse. Our paper fills this gap: our analyses are based on four German surveys administered by the same survey organization with the same pool of interviewers, and they use the same area control variables and identical explanatory variables at the interviewer level.
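To make the modelling approach concrete, below is a minimal sketch of a random-intercept logistic model of unit nonresponse with an interviewer-level predictor and an area control, the standard way of quantifying interviewer effects on nonresponse. The data are simulated and all variable names (response, experience, urban, interviewer) are hypothetical; this is not the authors’ actual specification.

```python
# Minimal sketch: random-intercept logistic model of unit nonresponse.
# Simulated data; all variable names are hypothetical.
import numpy as np
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

rng = np.random.default_rng(42)
n_iv, cases = 100, 40  # interviewers, sampled cases per interviewer

iv = np.repeat(np.arange(n_iv), cases)
experience = np.repeat(rng.integers(0, 20, n_iv), cases)  # interviewer level
iv_effect = np.repeat(rng.normal(0, 0.5, n_iv), cases)    # random intercept
urban = rng.integers(0, 2, n_iv * cases)                  # area control

# Response propensity: fixed part plus the interviewer random effect.
eta = -0.3 + 0.02 * experience - 0.2 * urban + iv_effect
response = rng.binomial(1, 1 / (1 + np.exp(-eta)))

df = pd.DataFrame({"response": response, "experience": experience,
                   "urban": urban, "interviewer": iv})

# Random intercept per interviewer, fitted by variational Bayes.
model = BinomialBayesMixedGLM.from_formula(
    "response ~ experience + urban",
    {"interviewer": "0 + C(interviewer)"}, df)
fit = model.fit_vb()

# Intra-interviewer correlation on the latent logistic scale;
# vcp_mean holds the posterior mean of the log random-effect SD.
sigma2_u = np.exp(2 * fit.vcp_mean[0])
print("ICC:", sigma2_u / (sigma2_u + np.pi ** 2 / 3))
```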
Despite the numerous similarities across the four surveys, our results show high variability in the interviewer characteristics that explain interviewer effects on unit nonresponse.
Literature
West, B. T., & Blom, A. G. (2016). Explaining interviewer effects: A research synthesis. Journal of Survey Statistics and Methodology. First published online: November 1, 2016, 1–37. doi: 10.1093/jssam/smw024
Many, if not all, face-to-face surveys are subject to interviewer effects on a range of outcomes. Previous research shows that interview length and speed are subject to interviewer effects to a large extent. Straight-lining tendency and other satisficing symptoms have also been shown to be subject to interviewer effects, albeit to a smaller extent. Moreover, interview speed and satisficing/straight-lining tendency can be expected to be related: higher speed can increase cognitive difficulty, which respondents might deal with by straight-lining, whereas straight-lining can decrease response latency and hence increase interview speed. In this paper, we first repeat previous analyses of interviewer effects on interview speed and straight-lining tendency for the seventh round of the European Social Survey. The results confirm previous findings: a large intra-interviewer correlation coefficient for interview speed and a somewhat smaller one for straight-lining. We then study the correlation between interview speed and straight-lining tendency, without determining causality, and decompose it into an interviewer-level and a respondent-level correlation. Results show that the positive interviewer-level correlation between interview speed and straight-lining tendency exceeds the respondent-level correlation. This indicates that it is the ‘fast’ interviewers who carry out interviews during which more straight-lining occurs.
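The decomposition used here can be illustrated with a short sketch: the overall association is split into a between-interviewer correlation (of interviewer means) and a within-interviewer, respondent-level correlation (of deviations from those means). The data below are simulated, and speed and sl are hypothetical stand-ins for the ESS interview-speed and straight-lining measures.

```python
# Minimal sketch: decompose a correlation into between- and
# within-interviewer parts. Simulated data, hypothetical names.
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
n_iv, n_resp = 200, 15  # interviewers, respondents per interviewer

pace = rng.normal(0, 1, n_iv)  # interviewer-level 'pace' drives both outcomes
iv = np.repeat(np.arange(n_iv), n_resp)
speed = pace[iv] + rng.normal(0, 1, n_iv * n_resp)
sl = 0.6 * pace[iv] + 0.1 * (speed - pace[iv]) + rng.normal(0, 1, n_iv * n_resp)
df = pd.DataFrame({"iv": iv, "speed": speed, "sl": sl})

# Between part: correlation of the interviewer means.
means = df.groupby("iv")[["speed", "sl"]].mean()
r_between = means["speed"].corr(means["sl"])

# Within part: correlation of deviations from each interviewer's mean.
dev = df[["speed", "sl"]] - df.groupby("iv")[["speed", "sl"]].transform("mean")
r_within = dev["speed"].corr(dev["sl"])

print(f"between-interviewer r = {r_between:.2f}, within r = {r_within:.2f}")
```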
This presentation compares different ways of modelling interviewer experience in computer-assisted personal interviews.
Interviewer experience is one of the most studied influences on unit nonresponse, especially on the response rate. A review of previous studies on the impact of interviewer experience on unit nonresponse shows that researchers model experience in different ways. Following the literature, three levels of interviewer experience can be distinguished: a macro level (e.g. experience as an interviewer over the lifetime), a meso level (e.g. experience over multiple waves of a longitudinal survey) and a micro level (e.g. experience over a survey’s field period). As a result, existing findings are hardly comparable, and it is unclear whether the different kinds of modelling influence the results. It is therefore necessary to analyse how interviewer experience is modelled.
For this analysis, information about the interviewers in the National Educational Panel Study (NEPS) is used, in particular from starting cohort one (SC1). The aim is to find out whether the variance in unit nonresponse explained differs across the following kinds of modelling (a minimal comparison is sketched after the list):
Macro level:
• years of experience in a specific survey organisation
Meso level:
• number of interviewer deployments within a specific starting cohort (NEPS; SC1)
Micro level:
• number of conducted interviews within a specific wave (NEPS; SC1; wave 3)
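As a minimal sketch of the intended comparison, one logistic regression of unit nonresponse could be fitted per operationalisation and the explained variance compared (here via McFadden’s pseudo-R²). The data are simulated and the column names are invented placeholders, not actual NEPS SC1 variables.

```python
# Minimal sketch: compare experience operationalisations by explained
# variance in unit nonresponse. Simulated data, hypothetical columns.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 5000
df = pd.DataFrame({
    "exp_years": rng.integers(0, 25, n),        # macro: years at organisation
    "exp_waves": rng.integers(1, 10, n),        # meso: deployments in SC1
    "exp_interviews": rng.integers(1, 200, n),  # micro: interviews in wave 3
})
eta = -0.5 + 0.03 * df["exp_waves"] + 0.002 * df["exp_interviews"]
df["nonresponse"] = rng.binomial(1, 1 / (1 + np.exp(eta)))

for measure in ["exp_years", "exp_waves", "exp_interviews"]:
    fit = smf.logit(f"nonresponse ~ {measure}", data=df).fit(disp=0)
    print(f"{measure:15s} pseudo-R2 = {fit.prsquared:.4f}")
```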
Does interviewer presence affect answers in factorial survey experiments? Does it enhance data quality, or does it foster social desirability bias if sensitive dimensions are included? What role do interviewer characteristics play?
Over the past decades, factorial survey experiments have become an increasingly popular tool in many subfields of the social sciences. Part of their popularity clearly stems from the possibility of modelling decision scenarios more realistically than single-item questions: by including various dimensions at once, vignettes can account for the fact that real-life decisions typically require a simultaneous consideration of several factors. An independent but joint experimental variation of the factors allows the researcher to quantify their impact on the requested evaluations and to estimate trade-offs and interactions between the different factors.
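To illustrate this independent but joint variation, the sketch below enumerates the full factorial universe of vignettes and draws a random deck for one respondent. The dimensions and levels are invented for illustration and are not those of the actual study.

```python
# Minimal sketch: full factorial vignette universe and a random deck.
# Dimensions and levels are illustrative only.
import itertools
import random

dimensions = {
    "occupation": ["nurse", "engineer", "clerk"],
    "earnings": ["1500 EUR", "3000 EUR", "6000 EUR"],
    "hours": ["20h/week", "40h/week"],
}

# Independent joint variation: every combination of levels occurs once.
universe = [dict(zip(dimensions, levels))
            for levels in itertools.product(*dimensions.values())]
print(len(universe), "vignettes in the full factorial universe")  # 18

# Each respondent rates a random subset; dimension effects and their
# interactions can then be estimated by regressing ratings on the levels.
random.seed(3)
for vignette in random.sample(universe, k=6):
    print(vignette)
```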
Factorial survey experiments require visual representation and are therefore suitable for implementation in completely self-administered survey modes (CASI and PAPI) as well as in self-administered modules within face-to-face interviews. For practitioners who have to choose between these modes of data collection, it is crucial to know whether the additional costs of personal interviews pay off in terms of better data quality, that is, less item nonresponse, more consistent answers and less heuristic response behaviour. Somewhat surprisingly, interviewer effects in factorial survey experiments have, to the best of our knowledge, not received any attention in survey research so far.
We take a first step towards filling this void by presenting results from a vignette module on the fairness of earnings that was administered in two different survey modes: one with interviewer presence and one completely self-administered. For both survey modes, random samples of the German residential population were used.
We are mainly interested in the effects of interviewer presence on item nonresponse, inconsistent responses and measurement errors such as response sets, but also in the extent to which interviewer presence affects the substantive results of the factorial survey experiment. Although it has been argued that factorial surveys might be an appropriate method to reduce social desirability bias, it is unclear whether this still holds when an interviewer is present. We therefore focus on sensitive dimensions in the vignette texts and compare their effects across survey modes.
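A minimal sketch of such a cross-mode comparison, assuming a hypothetical rating outcome, a binary indicator for the sensitive vignette level, and a binary mode indicator: the mode-by-dimension interaction captures how much the sensitive dimension’s effect changes when an interviewer is present.

```python
# Minimal sketch: mode-by-dimension interaction for a sensitive dimension.
# Simulated data; all variables are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(11)
n = 2000
df = pd.DataFrame({
    "sensitive": rng.integers(0, 2, n),   # sensitive vignette level shown
    "iv_present": rng.integers(0, 2, n),  # 1 = interviewer-administered mode
})
# Simulate a dampened sensitive-dimension effect under interviewer presence.
df["rating"] = (1.0 * df["sensitive"]
                - 0.5 * df["sensitive"] * df["iv_present"]
                + rng.normal(0, 1, n))

fit = smf.ols("rating ~ sensitive * iv_present", data=df).fit()
print(fit.params)  # 'sensitive:iv_present' is the cross-mode difference
```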
Additionally, we collected data on interviewer characteristics, which enables us to examine whether, for instance, the interviewer’s sex influences the respondent’s evaluation of the corresponding vignette dimension.