ESRA 2025 Preliminary Program
All time references are in CEST
Interviewers across the Survey Life Cycle 3
Session Organisers | Dr Mariel Leonard (DIW-Berlin), Dr Zachary Smith (National Center for Health Statistics (NCHS))
Time | Wednesday 16 July, 09:00 - 10:30
Room | Ruppert 0.33
Interviewers are central to survey operations, from qualitative question design and evaluation – as cognitive interviewers, focus group moderators, and even expert reviewers – to quantitative survey administration in the field. A large body of literature has identified various ways in which the identity, behavior, and disposition of interviewers influence the quality of data collected. A growing consensus holds that, in both qualitative and quantitative work, interviewers should be understood not merely as mindless, faceless data collection machines, but as researchers who contribute to the research process. Indeed, as research on interviewer effects has already shown, the consequences of ignoring interviewers’ humanity and research capabilities may be particularly important for data quality.
This panel invites contributions addressing either qualitative pre-fielding or quantitative survey administration that consider:
1. Whether and how interviewers can best be incorporated into the research team;
2. How interviewers affect the quality of data collected (positively or negatively);
3. How interviewers navigate complex issues, for example, sensitive survey topics or respondents unaccustomed to the survey process;
4. Considerations of the “interviewer” in a self-administered context.
Keywords: interviewers, interviewer effects, cognitive interviewing, data quality, sensitive topics
Papers
Training Cognitive Interviewers to Be Members of the Research Team
Dr Alisú Schoua-Glusberg (Research Support Services Inc.) - Presenting Author
The literature describes two different possible roles for cognitive testing interviewers. Willis and Artino (2013) and Miller et al. (2014) discuss how this varies depending on the cognitive testing methodology. In think-alouds, the interviewer barely guides the respondent in expressing their thoughts while responding and may use scripted probes administered identically to all respondents. With the probing technique, interviewers elicit narratives from respondents that provide context from their lives for their answers to the survey questions, revealing how each question was interpreted and whether the response given matches that reality.
Unquestionably, most survey interviewers can be trained in the first approach, where probing is fully scripted and spontaneous probing is not encouraged. Such an approach may be best in projects with large numbers of interviews, or in situations with no trained cognitive interviewers and limited training resources. The second approach yields richer results because there is no need to identify possible error a priori, thus reducing the effects of researcher bias. It aims to elicit patterns of interpretation and, through them, uncover response error wherever present.
However, most studies cannot afford the time or cost of training survey interviewers to be good qualitative interviewers. Interviewers must understand the testing goals for each question, learn how to probe on narratives to make sure the respondent’s reality supports their choice of answer, and learn to ascertain when they have probed enough and when they need additional information. These are analyst tasks that have to be exercised on the fly during interviews. When writing interview notes, interviewers also need to distinguish what to include from what is not relevant.
This presentation will focus on the cognitive interviewer’s role as researcher and on how survey field interviewers can be trained to be first-line researcher-analysts using the probing technique. It will include a discussion of how best to do so.
Interviewer Understandings of Rapport
Dr Mariel Leonard (DIW-Berlin) - Presenting Author
Rapport is considered essential to the success of interviews, particularly those that are long or involve sensitive questions. The literature on survey methods – from Standard Survey Interviewing to in-house training manuals – emphasizes the absolute necessity of establishing rapport with survey respondents. Yet, despite its frequent appearance in the literature, "rapport" is a term that is often left undefined, resulting in unclear standards for training and field monitoring.
In this paper, I consider how interviewers themselves define rapport within their work. I analyze data from in-depth interviews collected as part of the Interviewer Quality of Life Study. I find that interviewers are able to provide cogent definitions of rapport that may provide a strong foundation for improving interviewer training as well as the monitoring of rapport-building in the field.
What Took You so Long? The Role of Interviewer Experience as a Determinant of Interview Duration
Mr Söhnke Bergmann (Austrian Central Bank (OeNB)) - Presenting Author
Mr Tobias Schmidt (German Central Bank (Deutsche Bundesbank))
This study examines the influence of interviewer experience on the duration of survey interviews, with a particular focus on survey-specific experience. Using paradata from a large-scale panel survey on household finances in Germany (the Panel on Household Finances, PHF), we are the first to investigate whether interviewers’ familiarity with a survey affects interview speed in panel waves. For this triennial survey, we are able to link interviewers across waves using a unique interviewer ID, and can thus observe how the speed with which they conduct interviews develops not only within one wave but across waves. Our findings indicate that interviewers with more experience conducting the PHF survey complete interviews significantly faster than those with less or no PHF experience. Additionally, we confirm previous findings that (1) interviewers become faster within a wave of the survey and (2) interviewers with greater overall experience, i.e. more years spent with the survey company, are quicker. These results are important because interview duration is often used as a proxy measure for data quality (Yan and Tourangeau, 2008; Loosveldt and Beullens, 2013). Survey designers should therefore consider these findings when evaluating data quality.
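To make the kind of paradata analysis described above concrete, the following is a minimal sketch, not the authors' code, of regressing log interview duration on survey-specific and general interviewer experience, with standard errors clustered by interviewer. The file and column names (interview_paradata.csv, duration_min, phf_waves_worked, years_at_agency, wave, interviewer_id) are hypothetical.

```python
# Minimal sketch (assumed column names) of a duration-on-experience regression
# with interviewer-clustered standard errors, in the spirit of the study above.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("interview_paradata.csv")  # hypothetical paradata file
df["log_duration"] = np.log(df["duration_min"])

# Log interview duration explained by survey-specific experience (waves of the
# panel already worked), general experience (years at the fieldwork agency),
# and wave fixed effects.
model = smf.ols(
    "log_duration ~ phf_waves_worked + years_at_agency + C(wave)",
    data=df,
)
result = model.fit(cov_type="cluster", cov_kwds={"groups": df["interviewer_id"]})
print(result.summary())
```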
What can Performance Indicators tell us About Face-to-Face Interviewers and how can we use Them to Evaluate Quality?
Dr Christian Haag (LIfBi - Leibniz Institute for Educational Trajectories) - Presenting Author
Ms Martyna Flis (formerly LIfBi - Leibniz Institute for Educational Trajectories)
Dr Jutta von Maurice (LIfBi - Leibniz Institute for Educational Trajectories)
Interviewers are essential actors in face-to-face surveys because they have considerable influence on respondent contact and cooperation, and on data accuracy. While their individuality and deviation from standard protocols might be welcome in the recruitment process, the actual data collection often calls for strict standardization.
Personal interviewing requires a range of specific activities and corresponding skills from the interviewer. Looking at the domains of (1) contact and cooperation and (2) accuracy, we investigate specific tasks that may require partially incompatible skills and that are related to either unit- or item-nonresponse. We propose a set of performance indicators to break down interviewer variance into more specific components and to facilitate further investigation of, and agency in, controlling and optimizing interviewing from a survey-methodological perspective.
We conduct our analyses with data from an advanced panel wave of a representatively drawn sample of adults in Germany (National Educational Panel Study, NEPS). With a sample of 217 interviewers and 7,727 respondents, we isolate the interviewer variance and then fit hierarchical multilevel logistic and linear models to see which indicators retain considerable shares of interviewer variance even when controlling for interviewer- and respondent-level variables.
We find substantial interviewer variance remaining for two out of three contact and cooperation indicators as well as for three out of six accuracy indicators. The results further suggest that the two domains are indeed characterized by distinct tasks and skillsets and that there are different types of interviewers with varying performance levels. With our results we aim to contribute to discussions on how to cooperate with interviewers as experts and how to value their contributions while demanding specific behaviors and performance.
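As an illustration of the variance decomposition this abstract refers to, below is a minimal sketch of a random-intercept model with respondents nested in interviewers and the resulting interviewer-level intraclass correlation. It is not the authors' code: the variable names (performance_indicator, interviewer_id, resp_age, resp_female, resp_education) are assumptions, and it uses a continuous outcome for simplicity; the logistic indicators would require a mixed logit model instead.

```python
# Minimal sketch (assumed variable names) of an interviewer-variance
# decomposition: random intercept per interviewer, then the ICC.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("interviewer_indicators.csv")  # hypothetical analysis file

# Linear mixed model: performance indicator explained by respondent-level
# covariates, with a random intercept for each interviewer.
model = smf.mixedlm(
    "performance_indicator ~ resp_age + resp_female + resp_education",
    data=df,
    groups=df["interviewer_id"],
)
result = model.fit()

# Interviewer-level ICC: share of total residual variance attributable
# to differences between interviewers.
var_interviewer = result.cov_re.iloc[0, 0]   # random-intercept variance
var_residual = result.scale                  # within-interviewer residual variance
icc = var_interviewer / (var_interviewer + var_residual)
print(result.summary())
print(f"Interviewer ICC: {icc:.3f}")
```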
Understanding the Variability in Post-Survey Interviewer Observations in a National Panel Survey: Evidence from Health and Retirement Study
Miss Chendi Zhao (University of Michigan, Ann Arbor) - Presenting Author
Dr Brady T. West (University of Michigan, Ann Arbor)
Mr Abdelaziz Adawe (University of Michigan, Ann Arbor)
Post-survey interviewer observations provide valuable insights into the survey response process, yet substantial variability in these observations across both respondents and interviewers raises concerns about their reliability. While previous studies have identified this variability, demonstrated that interviewers contribute to it, and shown that these observations are systematically linked to survey and respondent characteristics, few have explored the issue longitudinally. This study addresses that gap by investigating the sources of variability in interviewer observations using data from 11 post-survey assessments collected across three waves (2016, 2018, 2020) of the Health and Retirement Study (HRS), a national U.S. survey focusing on individuals aged 51 and older.
A multilevel analysis was conducted with time, respondent, and interviewer as the three levels. Key predictors included respondent demographics, interview mode, and interview duration. Three models were fitted: an unconditional model, a model including the fixed effects of all predictors and their interactions with time, and a model retaining only the significant terms from the second model.
The first part of the analysis compared the decomposition of variance in each observation across levels between the unconditional and subsequent models. The results indicate that, for certain observations, interviewers account for a notable portion of the variance over time relative to respondents. In the second part of the analysis, the full model revealed significant fixed effects related to interview-specific characteristics and respondent demographics. Specifically, longer interviews and Computer-Assisted Telephone Interviewing were associated with more negative ratings compared to shorter interviews and Computer-Assisted Personal Interviewing. Furthermore, older respondents, females, racial minorities, and individuals with lower education levels tended to receive poorer ratings, resulting in lower overall assessments of interview quality. These findings emphasize the need for standardized interviewer training and suggest potential improvements in how interviewer observations are used to understand data quality.
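For readers unfamiliar with this kind of three-level setup, the sketch below shows one way a time-respondent-interviewer variance decomposition of a single observation could be specified: a random intercept for interviewers plus a variance component for respondents nested within interviewers, with repeated waves forming the lowest level. It is an illustrative assumption, not the authors' code, and all file and variable names (hrs_observations_long.csv, observation_rating, interviewer_id, respondent_id, wave, duration_min, mode, resp_age) are hypothetical.

```python
# Minimal sketch (assumed variable names) of a three-level decomposition of a
# post-survey observation: waves within respondents within interviewers.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("hrs_observations_long.csv")  # hypothetical long-format file

# Random intercept for interviewers (groups) plus a variance component for
# respondents nested within interviewers; wave-level residuals remain.
model = smf.mixedlm(
    "observation_rating ~ duration_min + C(mode) + resp_age + C(wave)",
    data=df,
    groups=df["interviewer_id"],
    re_formula="1",
    vc_formula={"respondent": "0 + C(respondent_id)"},
)
result = model.fit()

var_interviewer = result.cov_re.iloc[0, 0]   # interviewer intercept variance
var_respondent = result.vcomp[0]             # respondent variance component
var_wave = result.scale                      # residual (within-respondent) variance
total = var_interviewer + var_respondent + var_wave
print(f"Interviewer share:        {var_interviewer / total:.2%}")
print(f"Respondent share:         {var_respondent / total:.2%}")
print(f"Within-respondent share:  {var_wave / total:.2%}")
```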