
ESRA 2023 Glance Program


All time references are in CEST

Interviewers across the Survey Life Cycle

Session Organisers Dr Mariel Leonard (DIW-Berlin)
Dr Zachary Smith (National Center for Health Statistics (NCHS))
Time Tuesday 18 July, 09:00 - 10:30
Room

Interviewers are central to survey operations, from qualitative question design and evaluation – as cognitive interviewers, focus group moderators, and even expert reviewers – to quantitative survey administration in the field. A large body of literature has identified various ways in which the identity, behavior, and disposition of interviewers influence the quality of data collected. A growing consensus holds that, in both qualitative and quantitative work, interviewers should be understood not merely as mindless, faceless data collection machines, but as researchers who contribute to the research process. Indeed, as research on interviewer effects has already shown, the consequences of ignoring interviewers’ humanity and research capabilities may be particularly important for data quality.

This panel invites contributions addressing either qualitative pre-fielding or quantitative survey administration that consider:
1. Whether and how interviewers can best be incorporated into the research team;
2. How interviewers affect the quality of data collected (positively or negatively);
3. How interviewers navigate complex issues, for example, sensitive survey topics or respondents unaccustomed to the survey process;
4. Considerations of the “interviewer” in a self-administered context.

Keywords: interviewers, interviewer effects, cognitive interviewing, data quality, sensitive topics

Papers

Comparing Interviewer Behavior to Interviewing Instructions: Interviewer Errors Undermine Accurate Measurement in the American National Election Study

Mr Hector Santa Cruz (Stanford University)
Dr Matthew Berent (Matt Berent Consulting)
Dr Jon Krosnick (Stanford University) - Presenting Author
Dr Arthur Lupia (University of Michigan)
Dr Alexander Tahk (University of Wisconsin)

Using survey data to estimate population parameters and identify differences between population subgroups requires that variables of interest be measured in the same way for all respondents. Face-to-face interviews conducted in respondents’ homes may be especially prone to measurement inconsistencies due to the traditional absence of real-time monitoring and correction by supervisors. We explored the prevalence of measurement inconsistencies in face-to-face interviews using data from the 2008 American National Election Study Time Series Study. Transcripts of interviewers asking, and respondents answering, open-ended questions during interviews conducted in respondents’ homes revealed that rates of interviewer deviation from question administration instructions ranged from 14.80% for some open-ended questions to 66.94% for others. The likely consequences are the suppression of answers to subjective questions that respondents would have given had interviewers not deviated, and a net reduction in the percentage of respondents giving objectively correct answers to factual questions. We also identified respondent characteristics associated with deviations from question administration instructions. The results suggest caution when estimating population characteristics from face-to-face survey data, and they highlight the need for enhanced interviewer training to increase question administration consistency.


Training Cognitive Interviewers to Be Members of the Research Team

Dr Alisú Schoua-Glusberg (Research Support Services Inc.) - Presenting Author

The literature describes two different possible roles for cognitive testing interviewers. Willis and Artino (2013) and Miller et al. (2014) discuss how this role varies depending on the cognitive testing methodology. In think-alouds, the interviewer only minimally guides the respondent in expressing their thoughts while responding, and may use scripted probing administered identically to all respondents. With the probing technique, interviewers elicit narratives from respondents that provide context from their lives for their answers to the survey questions, revealing how each question was interpreted and whether the response given matches that reality.
Unquestionably, most survey interviewers can be trained in the first approach, where probing is fully scripted and spontaneous probing is not encouraged. Such an approach may be best in projects with large numbers of interviews, or in situations with no trained cognitive interviewers and limited training resources. The second approach yields richer results because there is no need to identify possible error a priori, thus reducing the effects of researcher bias. This approach aims to elicit patterns of interpretation and, through them, uncover response error wherever present.
However, most studies cannot afford the time or cost of training survey interviewers to be good qualitative interviewers. Interviewers must understand the testing goals for each question, learn how to probe on the narrative to make sure the respondent’s reality supports their choice of answer, and ascertain when they have probed enough and when they need additional information. These are analyst tasks that have to be exercised on the fly during interviews. When writing interview notes, interviewers also need to distinguish what to include and what is not relevant.
This presentation will focus on the cognitive interviewer’s role as researcher and how survey field interviewers can be trained to be first-line researcher-analysts using the probing technique. It will include a discussion of how best to do so.


Interviewer Understandings of Rapport

Dr Mariel Leonard (DIW-Berlin) - Presenting Author

Rapport is considered essential to the success of interviews, particularly those that are long or involve sensitive questions. Literature on survey methods – from Standard Survey Interviewing to in-house training manuals – emphasizes the absolute necessity of establishing rapport with survey respondents. Yet, despite its frequent appearance in the literature, "rapport" is a term that is often left undefined, resulting in unclear standards for training and field monitoring.

In this paper, I consider how interviewers themselves define rapport within their work. I analyze data from in-depth interviews collected as part of the Interviewer Quality of Life Study. I find that interviewers are able to provide cogent definitions of rapport that may provide a strong foundation for improving interviewer training as well as the monitoring of rapport-building in the field.


What Took You so Long? The Role of Interviewer Experience as a Determinant of Interview Duration

Mr Söhnke Bergmann (Austrian Central Bank (OeNB)) - Presenting Author
Mr Tobias Schmidt (German Central Bank (Deutsche Bundesbank))

This study examines the influence of interviewer experience on the duration of survey interviews, with a particular focus on survey-specific experience. Using paradata from a large-scale panel survey on household finances in Germany (the Panel on Household Finances, PHF), we are the first to investigate whether interviewers’ familiarity with a survey affects interview speed in panel waves. For this triennial survey, we are able to link interviewers across waves using a unique interviewer ID, and can thus observe the development of the speed with which they conduct interviews not only within one wave but across waves. Our findings indicate that interviewers with more experience conducting the PHF survey complete interviews significantly faster than those with less or no PHF experience. Additionally, we confirm previous findings that (1) interviewers become faster within a wave of the survey and (2) interviewers with greater overall experience, i.e. years spent with the survey company, are quicker. These results are important because interview duration is often used as a proxy measure for data quality (Yan and Tourangeau, 2008; Loosveldt and Beullens, 2013). Survey designers should therefore consider these findings when evaluating data quality.
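To make the modelling idea concrete, the sketch below shows one way an interviewer-experience analysis of interview duration could be set up: a linear mixed model with a random intercept per interviewer. This is purely illustrative and not the authors’ code; the file name and all column names (duration_minutes, phf_waves_experience, years_with_company, wave, interviewer_id) are hypothetical.

```python
# Illustrative sketch: regress log interview duration on survey-specific
# and overall interviewer experience, with a random intercept per
# interviewer to absorb stable between-interviewer speed differences.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("phf_paradata.csv")  # hypothetical paradata extract

# Log-transform duration to tame the right skew typical of timing data
df["log_duration"] = np.log(df["duration_minutes"])

model = smf.mixedlm(
    "log_duration ~ phf_waves_experience + years_with_company + C(wave)",
    data=df,
    groups=df["interviewer_id"],  # one random intercept per interviewer
)
result = model.fit()
print(result.summary())
```

In this setup, a negative coefficient on phf_waves_experience would correspond to the paper’s finding that survey-specific experience speeds up interviews, over and above general experience with the survey company.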


Interviewer Background and Its Influence in Interviews with Muslims

Ms Katrin Pfündel (Federal Office for Migration and Refugees) - Presenting Author
Dr Anja Stichs (Federal Office for Migration and Refugees)
Dr Amrei Maddox (Federal Office for Migration and Refugees)

Muslim immigrants are a particularly hard-to-reach group in survey research, often facing challenges related to trust, language barriers, and cultural differences, exacerbated by experiences of social exclusion and discrimination within this group. Co-ethnic interviewers might be able to surmount these hurdles. This study therefore investigates the effect of interviewer characteristics, particularly the use of foreign interviewers, on survey responses among Muslim immigrants.
In our article, we present the design of the Muslim Life in Germany 2020 study as an example of best practice for generating reliable data on Muslims. The study involved more than 4,500 face-to-face interviews with immigrants and their descendants from 23 Muslim-majority countries, including 3,472 Muslims. In view of the aforementioned obstacles to interviewing Muslims in Germany, Syrian interviewers were sent into the field alongside native interviewers. As a result, 43% of interviews with Muslims were conducted by these Arabic-speaking interviewers.
To investigate interviewer effects, we apply bivariate and multivariate analyses to the resulting data to examine how the nationality, ethnic background, and language of interviewers may influence the willingness of immigrants and their descendants from Muslim-majority countries to take part in surveys, as well as their response behaviour to culturally sensitive questions about religion and social cohesion. We thus investigate the potential of interviewers who share similar cultural or religious backgrounds with respondents to increase participation rates and reduce the share of missing values.


Asking Sensitive Questions: Interviewers' Feedback on Administering the Current Population Survey Disability Supplement

Dr Tywanquila Walker (U.S. Bureau of Labor Statistics) - Presenting Author
Dr Robin Kaplan (U.S. Bureau of Labor Statistics)

The Current Population Survey (CPS) is a nationally representative interviewer-administered survey of households providing information on labor force activity in the United States. The demographic portion includes a series of 6 yes/no questions that ask whether a person has serious difficulties with hearing, vision, mobility, cognition, or mental health. Feedback from stakeholders and research has shown the 6 questions may not identify all individuals who have difficulties affecting their ability to work, and there is a concern that the data undercount people with disabilities in the working-age population. In response to these concerns, and to improve statistics on the number of people with health conditions and difficulties that limit the kind or amount of paid work they can do, the 2024 CPS Disability Supplement was added to the main CPS. The supplement asked questions to 1) capture disabilities that may be missed with the existing 6 questions; 2) collect the type of disability; and 3) understand the challenges respondents face in obtaining and retaining gainful employment.

Given the sensitive nature of the topic, interviewers were asked to provide feedback on their experience administering the supplement. In two focus groups, 25 interviewers shared what it was like to conduct the supplement and provided insights into how the questions performed in the field. They candidly discussed their initial reservations about asking sensitive questions, how they navigated respondent refusals, probes they used to alleviate respondent confusion, and recommendations for improving the supplement for future iterations. We will discuss general findings and provide detailed qualitative analysis on administering questions related to type of disability, the kind or amount of paid work respondents can do, and job challenges.


What Can Performance Indicators Tell Us About Face-to-Face Interviewers and How Can We Use Them to Evaluate Quality?

Dr Christian Haag (LIfBi - Leibniz Institute for Educational Trajectories) - Presenting Author
Ms Martyna Flis (formerly LIfBi - Leibniz Institute for Educational Trajectories)
Dr Jutta von Maurice (LIfBi - Leibniz Institute for Educational Trajectories)

Interviewers are essential actors in face-to-face surveys because they have considerable influence on respondent contact and cooperation, and on data accuracy. While their individuality and deviation from standard protocols might be welcome in the recruitment process, the actual data collection often calls for strict standardization.
Personal interviewing requires a range of specific activities and corresponding skills from the interviewer. Looking at the domains of (1) contact and cooperation and (2) accuracy, we investigate specific tasks that might require partially incompatible skills and that are related to either unit or item nonresponse. We propose a set of performance indicators that break interviewer variance down into more specific components and facilitate further investigation and more deliberate control and optimization of interviewing from a survey-methodological perspective.
We conduct our analyses with data from an advanced panel wave of a representatively drawn sample of adults in Germany (National Educational Panel Study, NEPS). With a sample of 217 interviewers and 7,727 respondents, we isolate the interviewer variance and then fit hierarchical multilevel logistic and linear models to see which indicators retain considerable shares of interviewer variance even when controlling for variables at the interviewer and respondent levels.
We find substantial interviewer variance remaining for two out of three contact and cooperation indicators as well as for three out of six accuracy indicators. The results further suggest that the two domains are indeed characterized by distinct tasks and skillsets and that there are different types of interviewers with varying performance levels. With our results we aim to contribute to discussions on how to cooperate with interviewers as experts and how to value their contributions while demanding specific behaviors and performance.


Understanding the Variability in Post-Survey Interviewer Observations in a National Panel Survey: Evidence from the Health and Retirement Study

Miss Chendi Zhao (University of Michigan, Ann Arbor) - Presenting Author
Dr Brady T. West (University of Michigan, Ann Arbor)
Mr Abdelaziz Adawe (University of Michigan, Ann Arbor)

Post-survey interviewer observations provide valuable insights into the survey response process, yet substantial variability in these observations across both respondents and interviewers raises concerns about their reliability. While previous studies have identified this variability, demonstrated that interviewers contribute to it, and shown that these observations are systematically linked to survey and respondent characteristics, few have explored the issue longitudinally. This study addresses that gap by investigating the sources of variability in interviewer observations using data from 11 post-survey assessments collected across three waves (2016, 2018, 2020) of the Health and Retirement Study (HRS), a national U.S. survey focusing on individuals aged 51 and older.
A multilevel analysis was conducted with time, respondent, and interviewer as the three levels. Key predictors included respondent demographics, interview mode, and duration. Three models were fitted: an unconditional model; a model including the fixed effects of all predictors and their interactions with time; and a model including only the significant terms from the second model.
The first part of the analysis compared the decomposition of variance in each observation across levels between the unconditional model and the subsequent models. The results indicate that, for certain observations, interviewers account for a notable portion of variance over time compared to respondents. In the second part of the analysis, the full model revealed significant fixed effects related to interview-specific characteristics and respondent demographics. Specifically, longer interviews and Computer-Assisted Telephone Interviewing were associated with more negative ratings than shorter interviews and Computer-Assisted Personal Interviewing. Furthermore, older respondents, females, racial minorities, and individuals with lower education levels tended to receive poorer ratings, resulting in lower overall assessments of interview quality. These findings emphasize the need for standardized interviewer training and suggest potential improvements when using interviewer observations to understand data quality.
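For orientation, an unconditional three-level model of the kind described above can be written as follows (our notation for illustration, not the authors’):

```latex
% y_{tij}: observation recorded at time t for respondent i by interviewer j
y_{tij} = \gamma_{0} + u_{j} + v_{ij} + e_{tij},
\qquad
\operatorname{Var}(y_{tij}) = \sigma^{2}_{u} + \sigma^{2}_{v} + \sigma^{2}_{e}
```

The interviewer share of total variance, sigma^2_u / (sigma^2_u + sigma^2_v + sigma^2_e), is the quantity whose change between the unconditional and conditional models indicates how much of the interviewer-level variation the predictors explain.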


Gatekeeping at the survey interview? The impact of (non-)compliance with the survey protocol on women's rights reports in the Middle East and North Africa

Dr Kathrin Thomas (University of Aberdeen) - Presenting Author
Dr Isabella Kasselstrand (University of Aberdeen)

Compliance with the survey protocol is essential for the quality of survey estimates in interviewer-administered public opinion surveys. Interviewer characteristics and deviations from set rules, e.g., allowing third parties to be present or disregarding other regulations meant to ensure privacy, anonymity, and standardisation, can be harmful. While the effects of non-compliance have already been studied and possible solutions have been proposed, our paper addresses the problem in a region whose survey infrastructure has advanced only over the past decades and whose countries remain at most hybrid democracies: the Middle East and North Africa. We employ comparative survey data collected by Arab Barometer (2019 / 2019) to study the diversity of interviewers and compliance with the survey protocol, as well as the impact of both on self-reports on women’s rights in the region. Paradata collected after each interview allow us to consider interviewer characteristics and compliance with the protocol. We find some variation in compliance with the standardised survey protocol across contexts, as well as effects of interviewer characteristics and of the presence and type of bystanders on reports about women’s rights. This has implications for survey data collection and analysis: especially for culturally sensitive items, surveyors may wish to better train interviewers to comply with the anonymity and privacy functions of standardised survey interviews, and data providers might want to collect, provide, and report relevant information about the survey process to allow users to control for potential effects.


Mixed-Method Research on Field Representative Feedback on the National Health Interview Survey Follow-up Health Study

Dr Rodney Terry (U.S. Census Bureau) - Presenting Author

For the U.S. National Center for Health Statistics (NCHS), the U.S. Census Bureau conducted focus group and survey research on Field Representatives’ (FRs’) feedback about their experience asking respondents about the U.S. National Health Interview Survey (NHIS) Follow-up Health Study (FHS). As part of the NHIS interview, FRs described the study and asked a sample of NHIS adult respondents whether they were willing to be contacted to schedule an appointment to participate in the FHS. The study protocol consisted of a home health exam in which a trained health representative collected physical measurements (e.g., height) and blood and urine for testing. The purpose of the focus group and survey research was to collect feedback from FRs who asked NHIS respondents about the NHIS FHS pilot project, in order to identify strategies that may help increase FHS participation. We conducted five focus groups with 35 FRs, followed by an online survey of 71 FRs. Results showed that the FHS introduction was easy to learn and administer, and that the training and the FHS brochure were helpful learning tools for FRs and respondents, respectively. However, FRs reported that NHIS respondents found the study very invasive and time-consuming. Further, while the incentive was helpful for gaining the cooperation of many respondents, it was not effective in motivating reluctant respondents or respondents from high-income households. To improve the study, we recommended that NCHS investigate several strategies, including increasing the incentive, notifying and educating respondents about the FHS earlier in the interview or before the FR’s visit, and offering alternative forms of participation. The results of this research will help inform future iterations of the FHS and provide further insights into how interviewers navigate a complex and sensitive issue.


What types of survey questions are prone to interviewer effects? Evidence based on 31,000 ICCs from 28 countries

Dr Ádám Stefkovics (HUN-REN Centre for Social Sciences) - Presenting Author
Ms Anna Sára Ligeti (HUN-REN Centre for Social Sciences)

Interviewer effects are a common challenge in face-to-face surveys. Understanding the conditions under which interviewer variance is more likely to occur is essential for tackling sources of bias. Earlier evidence suggests that certain features of the survey instrument provide more ground for interviewer influence: for instance, attitudinal, sensitive, complex, or open-ended questions invite more interviewer variance. In this paper, we aim to validate earlier results, previously derived from single-country studies, using the large cross-national sample of the European Social Survey (ESS). We compare 31,270 intraclass correlations (ICCs) derived from 1,004 survey questions in 28 countries, using data from 10 waves of the ESS. The questions were manually coded on several characteristics, and these features were then used as predictors of the ICCs in multilevel models. The results show that question characteristics account for a significant portion of the variation in ICCs, with certain types, such as attitudinal and non-factual questions, items appearing later in the survey, and those using showcards, being especially susceptible to interviewer effects. Our findings have important implications for both interviewer training and questionnaire design.
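For readers unfamiliar with the measure, the interviewer-level intraclass correlation for a given question is the standard variance-ratio quantity (the generic formula, not a result from this paper):

```latex
\mathrm{ICC} = \frac{\sigma^{2}_{\text{interviewer}}}
                    {\sigma^{2}_{\text{interviewer}} + \sigma^{2}_{\text{residual}}}
```

That is, the proportion of total response variance attributable to interviewers; values near zero indicate negligible interviewer effects, while larger values flag questions that are more susceptible to them.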


The Importance of Interviewer Feedback on Survey Design: Results and Interviewer Perspectives

Mr Douglas Williams (US Bureau of Labor Statistics) - Presenting Author
Dr Robin Kaplan (US Bureau of Labor Statistics)
Dr Erica Yu (US Bureau of Labor Statistics)

Survey data collectors, or interviewers, are a key component of the survey lifecycle. While surveys conducted via self-administered modes, such as paper or web, continue to proliferate to mitigate increasing costs, the most critical and complex data collections still rely heavily on interviewers. This is generally to assist with complex topics, or to negotiate a common understanding of ambiguous responses that interviewers code on the spot into discrete response options. However, interviewers can be a rich and often overlooked resource for understanding the survey process, such as identifying respondent difficulty, poor questions, burden, or frequently misunderstood concepts within the questionnaire. Although interviewers may be capable of navigating such difficulties on their own, such non-standardized procedures can lead to interviewer and mode effects. In the growing era of mixed-mode surveys, relying on interviewers to remedy these issues can be short-sighted. In this paper we discuss our process for collecting qualitative feedback from interviewers working on the Current Population Survey (CPS), the national labor force survey in the United States. The CPS is in the midst of a modernization effort that will add a web mode in addition to personal-visit and telephone collection. This effort began with extensive pretesting that identified potential challenges and breakdowns in the cognitive response process for some CPS concepts. We then leveraged these insights to collect additional feedback and a new perspective from a large-scale self-administered survey of CPS interviewers. Information was collected on which questions posed the most respondent difficulty, concepts that required clarification, methods used to overcome difficulty, and approaches to hypothetical vignettes depicting complex interactions. We share some results, focusing on feedback collected from interviewers on how they address interview issues and on why such feedback is important to the survey modernization process.


Assessing the Role of the Interviewer: Reflexivity in Cognitive Interviewing Studies

Dr Zachary Smith (National Center for Health Statistics, Centers for Disease Control and Prevention) - Presenting Author

Results of cognitive interviewing studies are used both to improve question design and to contextualize survey data for the end user, and study results are usually based on verbatim interview transcripts or interviewer-provided summary notes. Despite this, question evaluation reports seldom consider the role of the interviewer—that is, reflexivity, the critical examination of a researcher’s own role in the generation, analysis, and reporting of data. Use of reflexivity as an analytic criterion can both assist in understanding research findings and increase the transparency and credibility of qualitative research by clearly denoting the ways in which subjective decisions and unconscious processes guide data collection and analysis. More broadly, while “interviewer effects” are a growing topic of research in quantitative surveys, they are rarely discussed in qualitative cognitive interviewing work, in which the interviewer’s semi-structured role is likely even more pronounced.

This paper draws on three separate cognitive interviewing studies on cannabis use, race/ethnicity, and cognitive decline, conducted by the United States’ National Center for Health Statistics. During the cannabis study, interviewers produced written reflections covering ways they felt they had influenced the direction of the data gathered in each interview, and the author separately conducted in-depth, semi-structured interviews with each member of the interview team. Revised interviewer reflection tasks that incorporated lessons from the cannabis study were used in subsequent studies on race and ethnicity and cognitive decline. This paper presents the initial findings from the cannabis study, challenges encountered in collecting reflexive data from interviewers, and subsequent findings from the race/ethnicity and cognitive decline studies. Through this process, the paper also offers preliminary recommendations on how to communicate, through the documentation of study findings, the ways interviewers perceived themselves as implicated in the process of data generation.