
ESRA 2023 Glance Program


All time references are in CEST

Video interviewing for survey data collection: beyond the pandemic

Session Organisers: Mr Tim Hanson (ESS HQ, City St George's, University of London)
Mr Matt Brown (Centre for Longitudinal Studies, UCL)
Professor Gabriele Durrant (NCRM, University of Southampton)
Time: Tuesday 18 July, 09:00 - 10:30
Room

Video interviewing became a more common method for survey data collection during the COVID-19 pandemic, having been used relatively rarely before then. It was often adopted as an alternative to in-person interviewing during periods when face-to-face fieldwork was not possible. Following the pandemic, there were questions over whether video interviewing would remain a viable and effective mode for surveys – and if so, in which contexts. Our session seeks to shed renewed light on this topic.

We invite submissions from researchers and practitioners who have used video interviewing for quantitative survey data collection. This includes use of video interviewing beyond the pandemic and studies carried out during the pandemic that have future implications for the method. Evidence to date suggests that it is feasible to carry out survey interviews via video platforms (Carr et al., 2023) and that the quality of video interviewing is comparable with in-person interviewing (Endres et al., 2023). However, the current evidence base is quite limited, and more evidence is needed to inform the future of video interviewing.

Submissions on various topics relating to video interviewing are welcome. These include: experimental studies that compare video interviewing with other modes (e.g. on data quality or measurement); the impact of video interviewing on response rates, representativeness and nonresponse; interviewer effects associated with video interviewing; analysis of paradata from video interviews; use of video interviewing in different contexts (e.g. as a standalone versus a complementary mode, or for longitudinal versus cross-sectional studies); practical lessons relating to the administration of video interviews, including the development of bespoke platforms; use of video interviewing for complex survey tasks; and use of both live and recorded video interviewing.

This session is being organised in partnership with the Survey Data Collection Methods Collaboration (‘Survey Futures’), funded by the UK Economic and Social Research Council.

Keywords: Video interviewing

Papers

Recruitment and Costs in Large-scale Video Interviews

Mr Andrew Hupp (University of Michigan) - Presenting Author
Dr Lauren Guggenheim (University of Michigan)
Mr David Howell (University of Michigan)
Ms Wen Change (University of Michigan)
Ms Makenna Harrison (University of Michigan)

Interest in live video interviewing has been partly motivated by the possibility of reducing costs relative to in-person (i.e., face-to-face) interviews while retaining some of the benefits of in-person data collection. Both the 2020 and 2024 American National Election Studies (ANES) used live video interviewing as part of a multi-mode data collection. This was done out of necessity in 2020 and, in 2024, to better understand recruitment and cost savings. The 2020 ANES design recruited respondents via a probability-based mail invitation to video interviews, which were conducted both before and after the election with the same respondents. In contrast, the 2024 design included an in-person pre-election interview and a post-election video interview.

This paper compares recruitment strategies and respondent characteristics across both studies. It also attempts to disentangle the monetary costs of video interviews compared with in-person interviews in the 2024 study. Recruitment to video was challenging in both election years, although the reasons varied; nonetheless, some groups of respondents do appear easier to recruit than others. Previous studies have noted the challenges in understanding per-interview costs, because much of the cost of a survey is design-dependent and separating costs for different design elements can be difficult. Any potential savings from video relative to in-person interviewing therefore likely depend on design decisions that affect effort and costs. We investigate the impact of appointment show rates in the 2024 design to determine where cost savings were realized and where additional costs arose (e.g., extra contact attempts, additional travel, or inefficient staffing). We also briefly consider other design components, such as sequential design and reminder strategies.


Increasing sample representativeness by using video interviews as a proxy for face-to-face interviewing in R11 of the ESS in Iceland

Mr Ævar Þórólfsson (Social Science Research Institute of the University of Iceland) - Presenting Author
Mr Árni Bragi Hjaltason (Social Science Research Institute of the University of Iceland)

Following the successful use of video interviewing in round 10 of the ESS, it was again permitted in round 11. In round 10 in Iceland, almost 37% of respondents (n = 333) chose a video call. A detailed comparison of video calls and face-to-face interviews showed that the quality of the interviews from the two modes in Iceland was very comparable, but their use did not improve the representativeness of the respondent group. This paper discusses the use of video calls in the round 11 ESS data collection in Iceland, which ran from February to June 2024. A random sample of 3,104 individuals was drawn from the National Population Register. An advance letter introducing the choice between participating through a face-to-face interview or a video call via computer was sent by mail, followed by a telephone call to schedule an interview in the respondent's preferred mode. After accounting for 182 ineligible individuals in the sample, the net response rate (AAPOR RR1) was 28.8%, with 842 completed interviews. Just under 42% of respondents (n = 352) chose a video call, a higher share than in round 10. As no COVID-19 restrictions were in place in Iceland during round 11, and respondents were not expected to fear interviewers spreading the virus, the data from this round are well suited to answering questions about what distinguishes respondents who choose a video call; for example, younger respondents and respondents with a higher level of education are more likely to do so. This paper examines how this information can be used to increase the representativeness of the respondent group and to identify groups that still need to be interviewed face-to-face.
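For reference, the reported figures are internally consistent. A simplified check of the response rate (treating all sampled cases not flagged as ineligible as eligible, which is a simplification of the full AAPOR RR1 denominator) gives:

```latex
\mathrm{RR1} \approx \frac{842}{3104 - 182} = \frac{842}{2922} \approx 0.288 \quad (28.8\%),
\qquad
\frac{352}{842} \approx 0.418 \quad (\text{just under } 42\% \text{ choosing video})
```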


Assessing the Impact of Video Interviewing on Survey Measurement and Data Quality: Evidence from an Experimental Study

Dr Marc Asensio (UCL) - Presenting Author
Mr Matt Brown (UCL)
Professor Gabriele Durrant (University of Southampton)
Mr Tim Hanson (University of London)

Amid declining response rates and rising survey costs, identifying reliable and cost-effective data collection methodologies is crucial. Technological advancements and widespread technology adoption offer promising opportunities for this goal. During the COVID-19 pandemic, video interviewing surged when in-person interviewing became unfeasible. In the UK, video interviewing was adopted in studies such as the National Child Development Study, the 1970 British Cohort Study, and the European Social Survey.

The novelty of video interviewing means its impact on measurement and data quality remains underexplored. Although literature on this topic has gradually emerged, experimental evidence comparing data quality between video, web, and in-person surveys is still lacking. This study addresses this gap using data from an experiment where participants (N=1,510) were randomly assigned to complete a survey via video, web, or in-person. We compare various aspects of measurement and data quality across the three modes, including completion time, item nonresponse, satisficing, and social desirability bias.

Much is known about mode differences between web and in-person surveys. Video interviewing shares features with both, and as such our analyses aim to determine whether data collected by video more closely resembles web or in-person data. Understanding the impact of video interviewing on measurement and data quality is vital to assess its potential as a post-pandemic survey data collection mode.
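As an illustration of the kind of mode comparison described above, the sketch below (not the study's analysis code; the data frame and the column names 'mode', 'duration_min' and the 'q'-prefixed item variables are hypothetical) tabulates two simple data-quality indicators, completion time and item nonresponse, by experimental group:

```python
# Minimal sketch, assuming a respondent-level pandas DataFrame with a 'mode'
# column ('video', 'web', 'in_person'), a 'duration_min' column, and survey
# items stored in columns prefixed 'q' (all names are hypothetical).
import pandas as pd

def quality_by_mode(df: pd.DataFrame) -> pd.DataFrame:
    item_cols = [c for c in df.columns if c.startswith("q")]
    # Share of items left unanswered per respondent
    df = df.assign(item_nonresponse=df[item_cols].isna().mean(axis=1))
    return df.groupby("mode").agg(
        n=("mode", "size"),
        median_duration_min=("duration_min", "median"),
        mean_item_nonresponse=("item_nonresponse", "mean"),
    )
```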


A Quality Comparison of Live Video Interviewing to Web and Face-to-Face using ANES Data

Dr Lauren Guggenheim (University of Michigan) - Presenting Author
Mr Andrew L. Hupp (University of Michigan)
Professor Nicholas A. Valentino (University of Michigan)

Live video interviewing continues to draw interest from researchers and survey organizations as they grapple with sampling and measurement challenges in other modes. One area of concern is whether video interviews produce data-quality advantages large enough to warrant the added cost over non-interviewer-administered data collection. Some early experimental research suggests that the quality of data collected by video is indeed high, even approaching that of in-person, face-to-face (FTF) interviews (Conrad et al., 2023; Endres et al., 2022). This research extends this inquiry using two large-scale observational studies: the American National Election Studies (ANES) 2020 and 2024 Time Series studies. We look at a variety of quality measures, including breakoffs, item nonresponse, insincere responding, and length of open-ended responses, in video compared with the FTF and online modes.

The 2020 and 2024 studies used different designs, though both deployed probability sampling. The 2020 ANES pushed respondents from a mail invitation to an online screener, which randomly assigned them to either a self-administered web questionnaire or a video interview. A post-election survey was completed in the same mode as the pre-election survey. The 2024 study consisted of two parallel samples, FTF and web, recruited simultaneously. FTF-sample respondents were invited to participate in the post-election study via video, although some were interviewed in person again; web respondents completed the study online both times. Differences in the recruitment and methodology of the two studies allow us to compare the quality of video under different conditions, as well as relative to other modes. We discuss both advantages and limitations of the designs for understanding data quality, and how the results fit within our current understanding of the quality of video as a survey mode.


Using an Appointment Booking Tool for Conducting Live Video Interviews in a Household Panel Study

Mr Jonas Kemmerling (infas - Institute for Applied Social Sciences) - Presenting Author
Mr Thomas Weiß (infas - Institute for Applied Social Sciences)

In contrast to other methods of data collection, video interviewing does not simultaneously act as a method of contacting the target population. Interviewer-administered surveys are usually conducted such that the interview takes place at the first contact, or an appointment is arranged at that point. For video interviews this approach is not possible, since target persons cannot be contacted via a videoconferencing tool. To conduct video interviews in the context of a household panel study (the German SOEP Innovation Sample, SOEP-IS), we developed an online appointment booking tool that allows respondents to book appointments for a video interview themselves.
The SOEP-IS sample can be divided into two broad groups: households used to being contacted by an interviewer offline (CAPI or CATI; "offliners") and households that participated via a web survey (CAWI; "onliners"). We exploit this sample structure to investigate whether booking an appointment is determined by panel experience (onliners vs. offliners, number of participations) or by sociodemographic factors such as age or education. Applying a logistic regression to the binary outcome of booking an appointment (yes/no), as sketched below, we provide insights into which subsamples and populations can be reached with this new approach.
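A minimal sketch of such a model (not the authors' code; the variable names 'booked', 'onliner', 'n_participations', 'age' and 'education' are hypothetical placeholders) could look as follows:

```python
# Minimal sketch: logistic regression of appointment booking (1 = booked,
# 0 = not booked) on panel experience and sociodemographics.
# All column names are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

def fit_booking_model(df: pd.DataFrame):
    model = smf.logit(
        "booked ~ onliner + n_participations + age + C(education)", data=df
    )
    return model.fit()

# Usage: print(fit_booking_model(panel_df).summary())
# Exponentiating the coefficients gives odds ratios for booking an appointment.
```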
Furthermore, we collect paradata on the usage of the appointment booking tool. Based on the number of logins, the time between invitation and appointment booking, and the access path (mail vs. email), we provide descriptive information on how the appointment booking tool was used in the context of SOEP-IS and how it could be used for other surveys, beyond the scope of video interviewing.
At the time of submission of this abstract, fieldwork is still ongoing. However, first results show that there is potential in using the appointment booking tool not only for video interviews but also to simplify appointment-making in other interview modes.


Video-Interviewing as Part of a Multi-Mode Design in Panel Studies: Insights From the Field

Ms Julia Witton (German Institute for Economic Research (DIW Berlin)) - Presenting Author
Dr Carina Cornesse (GESIS - Leibniz Institute for the Social Sciences)
Dr Markus Grabka (German Institute for Economic Research (DIW Berlin))
Professor Sabine Zinn (German Institute for Economic Research (DIW Berlin), Humboldt University of Berlin (HUB))

Based on previous research from other countries, computer-assisted live video interviewing (CALVI) can be expected to be a useful addition to existing mixed-mode survey designs.
To assess CALVI’s feasibility in a German household panel survey, we included a hypothetical inquiry on respondents’ willingness to participate in video interviews during the 2022 data collection wave of the German Socio-Economic Panel (SOEP). Of 22,549 respondents with valid answers, 39% indicated a willingness to participate in CALVI.
Based on these findings, we pretested CALVI in a separate survey setting. In the pretest, 73 target persons scheduled a video appointment, and 46 completed an interview. All participants consented to the recording of the interview, and 44 of them enabled their cameras. On average, the interviews were 106 minutes long. We find high levels of satisfaction among both respondents (85% positive) and interviewers (94% positive), and little item nonresponse or survey break-off.
Building on the pretest, we implemented CALVI in the ongoing wave of the SOEP Innovation Sample (SOEP-IS), targeting a mix of households interviewed in the previous wave using either CAWI or computer-assisted personal interviewing (CAPI). Our study employs a stratified randomized experimental design, with 50% of the panel's CAPI households and 25% of its CAWI households encouraged to switch to CALVI (see the sketch below). This design allows us to evaluate which households transition to CALVI, depending on their data collection mode in the previous wave.
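A stylised sketch of this assignment step (not the SOEP-IS implementation; column names and the random seed are illustrative) is:

```python
# Minimal sketch: stratified random assignment to the CALVI encouragement
# condition, with 50% of prior-wave CAPI households and 25% of prior-wave
# CAWI households encouraged. Column names are hypothetical.
import numpy as np
import pandas as pd

ENCOURAGE_RATE = {"CAPI": 0.50, "CAWI": 0.25}

def assign_calvi(df: pd.DataFrame, seed: int = 42) -> pd.DataFrame:
    rng = np.random.default_rng(seed)
    df = df.copy()
    df["calvi_encouraged"] = False
    for mode, rate in ENCOURAGE_RATE.items():
        idx = df.index[df["prev_mode"] == mode]
        n_treat = int(round(rate * len(idx)))
        treated = rng.choice(idx, size=n_treat, replace=False)
        df.loc[treated, "calvi_encouraged"] = True
    return df
```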
In our presentation, we will share findings from the hypothetical willingness-to-participate inquiry, pretest outcomes, and experimental fieldwork results, including response rates, sample composition, and data completeness. Additionally, we will discuss practical lessons learned and the implications of CALVI for improving the efficiency and quality of panel data collection.


Video Interviewing: Recent Developments and Future Directions

Ms Lena Centeno (Westat) - Presenting Author
Mr Ryan Hubbard (Westat)
Mr Richard Dulaney (Westat)
Mr Jesus Arrue (Westat)
Mr Brad Edwards (Westat)

At the 2023 ESRA conference, we presented on several aspects of Computer Assisted Video Interviewing (CAVI) on the Medical Expenditure Panel Survey – Household Component (MEPS-HC). We detailed the implementation of widespread video interviewing while in the field, analyzed characteristics of video respondents, and examined mode effects in the data. This presentation focuses on developments to date, and particularly on the effect of video interviewing on the field force.

Since 2022, we have trained 477 interviewers on CAVI and conducted more than 16,000 CAVI interviews. Key milestones include the full integration of CAVI into the field management system, the establishment of a dedicated team of 17 CAVI interviewers who have conducted over 1,750 interviews, and the implementation of respondent choice for mode preference, allowing participants to select between in-person and remote (CAVI) interviews.

To address respondent reluctance to host an in-person visit, MEPS has implemented respondent choice, empowering participants to select their preferred interview mode: in-person or remote via CAVI. As we expand the use of CAVI, MEPS-HC is committed to improving processes and systems to make it easier for interviewers and respondents. Our plans include automating specific tasks that interviewers find challenging, as well as integrating an appointment request into our existing systems that would allow household members to initiate the request for a CAVI appointment. CAVI may be tailored in the future to meet the changing demands of resources and waning respondent cooperation. This integrated system will also benefit other surveys by facilitating CAVI implementation and potentially reducing the need for costly in-person data collection while maintaining data quality.


Empirical Evidence of Opportunities and Challenges of Live Video Interviewing Across Seven Major UK Social Surveys

Professor Gabriele Durrant (University of Southampton) - Presenting Author
Dr Sebastian Kocar (University of Queensland)
Dr Matt Brown (University College London)
Dr Tim Hanson (City University)
Dr Carole Sanchez (University College London)
Dr Martin Wood (National Centre for Social Research)
Dr Kate Taylor (National Centre for Social Research)
Dr Maria Tsantani (National Centre for Social Research)
Dr Tom Huskinson (Ipsos)

This paper investigates the use of live video interviewing (VI) across seven UK social surveys that implemented this mode for data collection, including cross-sectional and longitudinal surveys. We provide rich comparative evidence on which to evaluate the advancements, opportunities and barriers relating to current and future use of VI in the UK.
The specific aims of the paper are to investigate:
1. uptake of and response rates to VI, including in comparison with other modes;
2. the characteristics of VI respondents informing representation bias; and
3. the feasibility of collecting consent, cognitive assessments and sensitive questions via VI.
Ultimately, we aim to evaluate VI's long-term feasibility as a survey data collection mode post-pandemic.

One of the main findings is that VI was used in different ways: either as the only/primary survey mode when in-person data collection was not possible, or as a complementary mode in mixed-mode designs. The results suggest that, if VI were the primary data collection mode, response rates would be notably lower than in alternative modes. Lower response rates in VI could potentially lead to an increase in representation bias. On the other hand, there are encouraging findings, including that once respondents agree to participate via VI, this mode proves to be a suitable approach for collecting consent, cognitive assessments and sensitive questions. This is a key finding since previous research identified limitations of other remote methods for collecting this kind of data - an important component of many studies, especially longitudinal studies. Overall, the evidence from this study suggests that VI, under certain conditions, can be a suitable complementary data collection mode in a mixed-mode survey design. We identify particular feasibility advantages for longitudinal surveys.


How does video interviewing affect measurement and data quality? Evidence from the 1970 British Cohort Study

Dr Sebastian Kocar (Research Fellow, University of Queensland) - Presenting Author
Mr Matt Brown (Senior Survey Manager, Centre for Longitudinal Studies, UCL)
Dr Gabriele Durrant (Professor in Social Statistics and Survey Methodology, Director of the ESRC National Centre for Research Methods (NCRM))
Professor George Ploubidis (Professor of Population Health & Statistics, Director of the 1958 NCDS & 1970 BCS)
Mrs Carole Sanchez (Survey Manager, Centre for Longitudinal Studies, UCL)

Using live video interviewing to conduct social surveys is relatively new, but interest in this mode accelerated considerably during the COVID-19 pandemic. Video interviewing may offer several benefits for conducting social surveys including cost-efficiency through reduced travel expenses, the ability to more easily reach geographically dispersed participants, greater convenience and scheduling flexibility. When used as an additional mode in a mixed mode survey, video interviewing could potentially enhance response rates and reduce non-response bias.

Video interviewing shares many features with in-person interviewing and a major attraction is that, at least in theory, this should result in minimal differences in measurement effects or data quality between these two modes. However, there is as yet limited evidence as to whether this is really the case.

The 1970 British Cohort Study, a multi-disciplinary longitudinal birth cohort study in the UK, made extensive use of video interviewing in its recently completed Age 51 Survey. Around half of all interviews were conducted via video with the remainder conducted face-to-face, providing an opportunity for comprehensive analysis of any measurement differences between the modes.

To control for non-random (self-)selection into mode, we use the rich life-history data collected in previous waves, along with propensity score weighting, as a method for causal inference. Using this approach, we can robustly evaluate the impact of video interviewing, in comparison with in-person interviewing, on various aspects of measurement and data quality. These include satisficing, social desirability bias, answer-order effects, consent to linkage of administrative data, performance in a series of cognitive assessments, and item nonresponse.
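As a stylised illustration of this weighting step (not the authors' code; the 'video' indicator and the covariate list are placeholders), inverse-probability weights for the video vs. in-person comparison could be constructed as follows:

```python
# Minimal sketch: inverse-probability-of-treatment weighting for mode
# selection (1 = video interview, 0 = face-to-face interview), with the
# propensity model fitted on prior-wave covariates. Names are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression

def ipw_weights(df: pd.DataFrame, covariates: list[str]) -> pd.Series:
    ps_model = LogisticRegression(max_iter=1000).fit(df[covariates], df["video"])
    ps = ps_model.predict_proba(df[covariates])[:, 1]
    # ATE-style weights: 1/ps for video cases, 1/(1 - ps) for face-to-face cases
    weights = df["video"] / ps + (1 - df["video"]) / (1 - ps)
    return pd.Series(weights, index=df.index, name="ipw")

# Weighted comparisons of data-quality indicators can then use these weights,
# e.g. via weighted means or survey-weighted regression models.
```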

Our findings will provide vital evidence on the impact of video interviewing on measurement and data quality, helping to determine whether video interviewing has a long-term future as a mode.