ESRA 2025 Preliminary Program
All time references are in CEST
Interviewers across the Survey Life Cycle
Session Organisers | Dr Mariel Leonard (DIW-Berlin); Dr Zachary Smith (National Center for Health Statistics (NCHS))
Time | Tuesday 15 July, 09:00 - 10:30
Room | Ruppert 011
Interviewers are central to survey operations, from qualitative question design and evaluation (as cognitive interviewers, focus group moderators, and even expert reviewers) to quantitative survey administration in the field. A large body of literature has identified the many ways the identity, behavior, and disposition of interviewers influence the quality of the data collected. A growing consensus holds that, in both qualitative and quantitative work, interviewers should be understood not as mindless, faceless data collection machines but as researchers who contribute to the research process. Indeed, as research on interviewer effects has already shown, ignoring interviewers' humanity and research capabilities can carry real consequences for data quality.
This panel invites contributions addressing either qualitative pre-fielding or quantitative survey administration that consider:
1. Whether, and how best, interviewers can be incorporated into the research team;
2. How interviewers affect the quality of data collected (positively or negatively);
3. How interviewers navigate complex issues, for example, sensitive survey topics or respondents unaccustomed to the survey process;
4. Considerations of the “interviewer” in a self-administered context.
Keywords: interviewers, interviewer effects, cognitive interviewing, data quality, sensitive topics
Papers
Comparing Interviewer Behavior to Interviewing Instructions: Interviewer Errors Undermine Accurate Measurement in the American National Election Study
Mr Hector Santa Cruz (Stanford University)
Dr Matthew Berent (Matt Berent Consulting)
Dr Jon Krosnick (Stanford University) - Presenting Author
Dr Arthur Lupia (University of Michigan)
Dr Alexander Tahk (University of Wisconsin)
Using survey data to estimate population parameters and identify differences between population subgroups requires that variables of interest are measured in the same way for all respondents. Face-to-face interviews conducted in respondents' homes may be especially prone to measurement inconsistencies due to a traditional absence of real-time monitoring and correction by supervisors. We explored the prevalence of measurement inconsistencies in face-to-face interviews using data from the American National Election Studies (ANES) 2008 Time Series Study. Transcripts of interviewers asking, and respondents answering, open-ended questions during interviews conducted in respondents' homes revealed that rates of interviewer deviation from question administration instructions ranged from 14.80% for some open-ended questions to 66.94% for others. The likely consequences are suppression of answers to subjective questions that respondents would have given had interviewers not deviated, and a net reduction in the percentage of respondents giving objectively correct answers to factual questions. We also identified respondent characteristics associated with deviations from question administration instructions. These results suggest caution when estimating population characteristics from face-to-face survey data, and they highlight the need for enhanced interviewer training to increase the consistency of question administration.
Gatekeeping at the survey interview? The impact of (non-)compliance with the survey protocol on women's rights reports in the Middle East and North Africa
Dr Kathrin Thomas (University of Aberdeen) - Presenting Author
Dr Isabella Kasselstrand (University of Aberdeen)
Compliance with the survey protocol is essential for the quality of survey estimates in interviewer-administered public opinion surveys. Interviewer characteristics and deviations from set rules, e.g., allowing third parties to be present or disregarding other regulations meant to ensure privacy, anonymity, and standardisation, can be harmful. While the effects of non-compliance have already been studied and possible solutions proposed, our paper addresses the problem in a region that has seen advances in survey infrastructure only over recent decades and whose countries remain at most hybrid democracies: the Middle East and North Africa. We employ comparative survey data collected by Arab Barometer (2019) to study interviewer diversity and compliance with the survey protocol, as well as their impact on self-reports about women's rights in the region. Paradata collected after each interview allow us to consider interviewer characteristics and compliance with the protocol. We find some variation in compliance with the standardised survey protocol across contexts, as well as effects of interviewer characteristics and of the presence and type of bystanders on reports about women's rights. This has implications for survey data collection and analysis: especially for culturally sensitive items, surveyors may wish to train interviewers more thoroughly on the anonymity and privacy functions of standardised survey interviews, and data providers might want to collect, provide, and report relevant information about the survey process to allow users to control for potential effects.
What types of survey questions are prone to interviewer effects? Evidence based on 31,000 ICCs from 28 countries
Dr Ádám Stefkovics (HUN-REN Centre for Social Sciences) - Presenting Author
Ms Anna Sára Ligeti (HUN-REN Centre for Social Sciences)
Interviewer effects are a common challenge in face-to-face surveys. Understanding the conditions under which interviewer variance is more likely to occur is essential for tackling sources of bias. Earlier evidence suggests that certain features of the survey instrument provide more ground for interviewer influence: for instance, attitudinal, sensitive, complex, or open-ended questions invite more interviewer variance. In this paper, we aim to validate earlier results, previously derived from single-country studies, using the large cross-national sample of the European Social Survey (ESS). We compare 31,270 intraclass correlations (ICCs) derived from 1,004 survey questions across 28 countries, using data from 10 waves of the ESS. The questions were manually coded on several characteristics, and these features were then used as predictors of ICCs in multilevel models. The results show that question characteristics account for a significant portion of the variation in ICCs, with certain types, such as attitudinal and non-factual questions, items appearing later in the survey, and those using showcards, being especially susceptible to interviewer effects. Our findings have important implications for both interviewer training and questionnaire design.
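As a rough sketch of the quantity at the heart of this paper: the interviewer-level ICC of a survey item can be estimated with a random-intercept multilevel model, as the between-interviewer variance divided by the total (between plus residual) variance. The Python example below uses simulated data and hypothetical variable names (answer, interviewer_id); it illustrates the general technique, not the authors' actual pipeline.

```python
# Minimal sketch: interviewer-level ICC from a random-intercept model.
# Simulated data; variable names are hypothetical, not from the ESS.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n_interviewers, n_per_interviewer = 50, 20
interviewer = np.repeat(np.arange(n_interviewers), n_per_interviewer)
u = rng.normal(0.0, 0.5, n_interviewers)     # interviewer effects (var = 0.25)
answer = u[interviewer] + rng.normal(0.0, 1.0, interviewer.size)
df = pd.DataFrame({"answer": answer, "interviewer_id": interviewer})

# Random intercept per interviewer: answer ~ 1 + (1 | interviewer_id)
fit = smf.mixedlm("answer ~ 1", df, groups=df["interviewer_id"]).fit()
var_between = fit.cov_re.iloc[0, 0]          # between-interviewer variance
var_within = fit.scale                       # residual (respondent) variance
icc = var_between / (var_between + var_within)
print(f"Estimated interviewer ICC: {icc:.3f}")  # ~0.20 for these parameters
```

One such ICC would be computed per question (and, in the cross-national setting, per country), and the resulting ICCs then modelled as outcomes with the coded question characteristics as predictors.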
Evaluating Interviewer Performance: The Benefits of Propensity-Adjusted Performance Scores in Complex Studies for Quality Monitoring
Ms Laura Löwe (Leibniz Institute for Educational Trajectories) - Presenting Author
Interpreting interviewer-level cooperation rates as a measure of interviewer performance can be misleading, as cooperation rates commonly do not account for the response propensity of the underlying contact attempts. For this reason, there is an increasing demand for interviewer performance indicators that explicitly account for survey complexity and the difficulty of contact attempts. Propensity-adjusted performance indicators are expected to be more accurate than unadjusted ones, and they are particularly relevant when the success probabilities of the contacts assigned to an interviewer are unequal. Using data from the National Educational Panel Study (NEPS), propensity-adjusted interviewer performance scores are estimated from the success probability of each contact attempt. The informative value of these scores for interviewer assessment, compared to unweighted cooperation rates, is documented for different respondent groups and different modes, namely computer-assisted telephone and personal interviews, including mode-switch studies. The empirical findings reveal that propensity-adjusted interviewer performance scores are less dependent on the difficulty of contact attempts. For samples with heterogeneous respondent characteristics, and for samples in which contacts are assigned to interviewers by geographical region, interviewer-level cooperation rates do not comprehensively reflect interviewer performance, and propensity-adjusted scores are indispensable for a fair evaluation.
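To make the adjustment concrete, one common construction is an observed/expected ratio: fit a response-propensity model on contact-attempt characteristics (excluding the interviewer), then divide each interviewer's achieved cooperations by the sum of the predicted propensities of their assigned attempts. The Python sketch below uses simulated data and hypothetical feature names; it illustrates this general idea under stated assumptions and is not the NEPS implementation.

```python
# Minimal sketch: propensity-adjusted interviewer performance scores as an
# observed/expected ratio. Simulated data; feature names are hypothetical.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "interviewer_id": rng.integers(0, 40, n),
    "urban": rng.integers(0, 2, n),           # assumed difficulty feature
    "prior_refusal": rng.integers(0, 2, n),   # assumed difficulty feature
})
# Harder contact attempts (urban, prior refusal) succeed less often.
logit = 0.5 - 0.8 * df["urban"] - 1.2 * df["prior_refusal"]
df["cooperated"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

# Step 1: response propensity of each contact attempt, estimated from case
# characteristics only (the interviewer is deliberately left out).
X = df[["urban", "prior_refusal"]]
df["propensity"] = LogisticRegression().fit(X, df["cooperated"]).predict_proba(X)[:, 1]

# Step 2: observed cooperations vs. the number the propensity model expected
# given the difficulty mix of each interviewer's assigned attempts.
by_iwer = df.groupby("interviewer_id")
score = by_iwer["cooperated"].sum() / by_iwer["propensity"].sum()
print(score.head())  # ~1 = as expected; >1 = beats the assigned difficulty mix
```

Unlike a raw cooperation rate, a score built this way does not automatically penalise interviewers who were assigned a disproportionately difficult mix of contacts.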