
ESRA 2025 Preliminary Program

All time references are in CEST

Mode Effects in Practice 2

Session Organisers: Dr Richard Silverwood (University College London)
Dr Liam Wright (University College London)
Time: Thursday 17 July, 09:00 - 10:30
Room: Ruppert 011

Recent years have seen greater mixing of modes, both within and (in longitudinal surveys) between sweeps of data collection. Unaccounted for, mode effects – differential measurement errors due solely to the mode utilised – can be a problem for obtaining accurate and unbiased estimates. For instance, mode effects can induce associations between variables and changes in survey mode between sweeps may bias estimates of change. Investigating and addressing mode effects is therefore an increasingly important area of methodological research.

Mode effects in mixed mode survey data are typically confounded by mode selection. Many methods to account for mode effects – e.g. statistical adjustment and imputation – presuppose that mode selection can be controlled for with available data. This is difficult, however, as selection factors may be unmeasured (especially in cross-sectional studies), measured poorly, subject to mode effects themselves, or unknown to the applied researcher, for whom mode selection typically lies outside their area of expertise. Determining the level of bias in practice and promoting methods that do not rely on assumptions about mode selection are of fundamental importance given the continued move towards mixed mode designs.

We invite submissions of research that investigates and addresses mode effects, with particular focus on the longitudinal survey setting. Novel methods for overcoming mode selection, including via integrating experimental work or using other non-standard analytic approaches, are of particular interest. We also invite contributions relating to user-friendly implementations of methods to handle mode effects and their communication to non-technical audiences.

Keywords: Mode effects; Mixed modes; Statistical analysis

Papers

A correlative method for examining the sources of variance in mixed-mode surveys: A different population or the survey-mode effect

Mrs Leah Bloy (Hebrew University Business School) - Presenting Author
Dr Nechumi Malovicki-Yaffe (Tel Aviv University)

Mixed-mode surveys present challenges in distinguishing between selection effects (biases from respondents' survey mode preferences) and mode effects (biases from the administration method). Existing solutions, such as embedded experiments or comparisons between matched samples, provide insights but are resource-intensive, difficult to implement, and limited in addressing selection bias. They often assume equal access to all modes and struggle to disentangle intertwined effects of mode and respondent selection.
This paper introduces a cost-effective method that resolves these issues by incorporating a "selection" variable, which categorizes respondents into three groups: (1) those unique to mode A, (2) those unique to mode B, and (3) those capable of responding in both modes, who are randomly assigned by the researchers to one of the two conditions. This classification breaks the full collinearity between selection effects and mode effects, which traditional approaches based on two groups fail to address. Combined with the "survey mode" variable, it enables separation of these biases using multiple regression analysis. These tests identify the independent contributions of each bias source to response variance, clarifying their relative impact.
The method is easy to implement in mixed-mode surveys by including a question about respondents' mode preferences. This facilitates robust analysis that controls for both mode and selection effects. Multiple regression models evaluate whether inconsistencies in responses arise from mode effects, selection effects, or both, and quantify the magnitude of each bias.
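The abstract does not specify the model in code, but the three-group logic can be sketched on simulated data. The sketch below (all variable names and effect sizes are hypothetical, not taken from the paper) shows why the design is identifiable: the "both modes" group contains respondents in each mode, so the mode dummy is no longer collinear with the selection dummies, and ordinary least squares recovers both effects.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Hypothetical "selection" variable: 0 = can only respond in mode A,
# 1 = can only respond in mode B, 2 = can respond in both modes.
group = rng.integers(0, 3, size=n)

# Survey mode (0 = A, 1 = B): fixed for groups 0 and 1, randomly
# assigned by the researcher within group 2.
mode = np.where(group == 0, 0,
       np.where(group == 1, 1, rng.integers(0, 2, size=n)))

# Simulated outcome with an assumed true mode effect (+0.5) and
# selection effects (+0.3 for A-only, -0.2 for B-only respondents).
y = (2.0 + 0.5 * mode
     + 0.3 * (group == 0) - 0.2 * (group == 1)
     + rng.normal(0.0, 1.0, size=n))

# Multiple regression: intercept, mode dummy, selection dummies
# (the randomly assigned "both" group is the reference category).
X = np.column_stack([np.ones(n), mode, group == 0, group == 1]).astype(float)
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(coef.round(2))  # roughly [2.0, 0.5, 0.3, -0.2]
```

With only two groups (A-only and B-only), the mode dummy would equal the group dummy exactly and the regression could not separate the two sources of variance.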

By disentangling these effects, the proposed method simplifies bias identification while improving the validity of mixed-mode survey findings. Researchers gain deeper insights into the origins of inconsistencies, enhancing the reliability of their conclusions.


A systematic review of the (quasi-)experimental literature on mode effects

Dr Georgia Tomova (University College London) - Presenting Author
Dr Richard Silverwood (University College London)
Dr Liam Wright (University College London)

Surveys are increasingly adopting mixed-mode methodologies, whereby data are collected by some combination of face-to-face, telephone, web and video. Due to differences in how items are presented and the presence/absence of interviewers, responses can differ systematically between modes, a phenomenon referred to as a mode effect. Unaccounted for, mode effects can introduce bias in analyses of mixed-mode survey data.

Mode effects in mixed-mode survey data are typically confounded by mode selection. Many methods to account for mode effects – e.g. statistical adjustment and imputation – rely on an assumption that selection into mode can be fully controlled for with available data, but this may be unachievable in practice. In an accompanying presentation, we emphasise a promising but underutilised approach, simulation-based sensitivity analysis, which does not rely on the assumption that there is no unmodelled selection into mode. Performing such sensitivity analyses requires assumptions about the plausible magnitude of potential mode effects. Thankfully, a large number of experimental and re-interview studies have been performed to assess this for different variables in various between-mode comparisons. Unfortunately, results are spread across multiple papers and are not straightforward to extract. We performed a systematic review of the existing literature and combined the results into a searchable database, so that relevant estimates can be more easily identified by researchers wanting to perform sensitivity analyses. We will present the findings from the systematic review and illustrate the use of the resultant database in analyses of mixed-mode data from the Centre for Longitudinal Studies’ British birth cohorts.
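The authors' sensitivity-analysis method is not detailed in the abstract; as a rough illustration of the general idea only (all names and magnitudes below are hypothetical), one can take assumed mode-effect magnitudes – such as plausible values drawn from a database of published experimental estimates – remove them from the responses of one mode, and observe how the estimate of interest shifts across the assumed range.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical mixed-mode sample: outcome y with a web/face-to-face indicator.
n = 2000
web = rng.integers(0, 2, size=n)
y = rng.normal(10.0, 2.0, size=n) + 0.4 * web  # true mode effect unknown in practice

# For each assumed mode-effect magnitude delta, subtract delta from web
# responses and recompute the estimate of interest -- here, the overall mean.
deltas = np.linspace(-1.0, 1.0, 9)
adjusted_means = np.array([np.mean(y - d * web) for d in deltas])
for d, m in zip(deltas, adjusted_means):
    print(f"assumed mode effect {d:+.2f} -> adjusted mean {m:.3f}")
```

If the substantive conclusion holds across the whole range of plausible mode-effect magnitudes, it is insensitive to unmodelled selection into mode; if not, the range of deltas at which it changes quantifies how large a mode effect would have to be to overturn it.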


An evaluation of different strategies to adjust for mode-specific measurement biases. An application to general population health surveys

Dr Nino Mushkudiani (Statistics Netherlands) - Presenting Author
Dr Kees van Berkel (Statistics Netherlands)
Dr Barry Schouten (Statistics Netherlands)

In a two-stage reinterview study conducted in the Dutch Health Survey in 2022 and 2023, relatively large mode-specific measurement biases were found. Biases were greater for survey questions that are subjective, such as self-perceived health, and/or that require cognitive effort and recall, such as various lifestyle indicators. Biases were modest to negligible for survey questions that were expected in advance to be relatively objective and easy to answer.
The identification of the biases led to the most important follow-up question: how to deal with them in the production of statistics. Over the past decade, several scientists have proposed strategies to account for mode-specific measurement biases. These strategies range from tailoring data collection to adjusting for bias.

In this paper, we apply and compare three strategies that differ widely in terms of cost, implementation complexity, and data processing. The first strategy is mode calibration, where the survey mode is added as a weighting variable to the calibration process. This is a crude but cheap approach. The second strategy is an adaptive survey design technique that incorporates additional constraints to ensure measurement equivalence. This strategy has a strong impact on the complexity of data collection. The third strategy is to adjust for biases using (periodic) reinterviews, evaluating estimates from the point of view of the mean squared error. This strategy requires additional investment and impacts data processing. We illustrate the three approaches for the Dutch Health Survey, but we also discuss implementation for other general population surveys.
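The first strategy, mode calibration, can be illustrated in heavily simplified form: post-stratification on mode alone, weighting each respondent so the weighted mode distribution matches a fixed target mix. This sketch uses hypothetical data and target shares; an actual calibration (as at Statistics Netherlands) would include further auxiliary variables and an algorithm such as raking.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical realised sample: 70% web, 30% face-to-face respondents.
n = 1000
mode = rng.choice(["web", "f2f"], size=n, p=[0.7, 0.3])
y = np.where(mode == "web",
             rng.normal(5.2, 1.0, size=n),   # web answers shifted upwards
             rng.normal(5.0, 1.0, size=n))

# Mode calibration, reduced to post-stratification on mode: weight each
# respondent so the weighted mode shares match an assumed 50/50 benchmark.
targets = {"web": 0.5, "f2f": 0.5}
shares = {m: np.mean(mode == m) for m in targets}
w = np.array([targets[m] / shares[m] for m in mode])

weighted_mean = np.sum(w * y) / np.sum(w)
print(round(weighted_mean, 3))
```

This removes the dependence of the estimate on the realised mode mix, which is why the paper characterises it as crude but cheap: it stabilises the mode composition without correcting the measurement bias within each mode.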