
ESRA 2025 Preliminary Program

All time references are in CEST

Mode Effects in Practice

Session Organisers: Dr Richard Silverwood (University College London)
Dr Liam Wright (University College London)
Time: Wednesday 16 July, 16:00 - 17:30
Room: Ruppert 114

Recent years have seen greater mixing of modes, both within and (in longitudinal surveys) between sweeps of data collection. Unaccounted for, mode effects – differential measurement errors due solely to the mode utilised – can be a problem for obtaining accurate and unbiased estimates. For instance, mode effects can induce associations between variables and changes in survey mode between sweeps may bias estimates of change. Investigating and addressing mode effects is therefore an increasingly important area of methodological research.

Mode effects in mixed mode survey data are typically confounded by mode selection. Many methods to account for mode effects – e.g. statistical adjustment and imputation – presuppose that mode selection can be controlled for with available data. This is difficult, however, as selection factors may be unmeasured (especially in cross-sectional studies), measured poorly, subject to mode effects themselves, or unknown to the applied researcher, for whom mode selection typically lies outside their area of expertise. Determining the level of bias in practice and promoting methods that do not rely on assumptions about mode selection are of fundamental importance given the continued move towards mixed mode designs.

We invite submissions of research that investigates and addresses mode effects, with particular focus on the longitudinal survey setting. Novel methods for overcoming mode selection, including via integrating experimental work or using other non-standard analytic approaches, are of particular interest. We also invite contributions relating to user-friendly implementations of methods to handle mode effects and their communication to non-technical audiences.

Keywords: Mode effects; Mixed modes; Statistical analysis

Papers

Viewing mode effects through the lens of causal directed acyclic graphs

Dr Liam Wright (Centre for Longitudinal Studies, University College London) - Presenting Author
Dr Georgia Tomova (Centre for Longitudinal Studies, University College London)
Dr Richard Silverwood (Centre for Longitudinal Studies, University College London)

Surveys are increasingly adopting mixed-mode methodologies, whereby data are collected by some combination of face-to-face, telephone, web and video. Due to differences in how items are presented (including the presence or absence of interviewers), responses can differ systematically between modes, a phenomenon referred to as mode effects. Unaccounted for, mode effects can introduce bias in analyses of mixed-mode survey data.

Mode effects in mixed-mode survey data are typically confounded by mode selection since survey participants self-select into their preferred mode. Many methods to account for mode effects – e.g. statistical adjustment and imputation – rely on an assumption that selection into mode can be fully controlled for with available data. The plausibility of this assumption may, however, be questioned as selection factors may be unknown, unmeasured (especially in cross-sectional studies), measured poorly, or subject to mode effects themselves.

In this talk, we place the problem of mode effects within the simple and intuitive causal directed acyclic graph (DAG) framework. Using this framework, we describe the main methods for handling mode effects. We emphasise a promising but underutilised approach – simulation-based sensitivity analysis – which does not require assuming the absence of unmodelled selection into mode. We demonstrate an application of this sensitivity analysis approach using real-world mixed-mode data from the Centre for Longitudinal Studies’ British birth cohort studies.
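The abstract does not give implementation details, but the core idea of a simulation-based sensitivity analysis can be sketched as follows. This is a minimal illustration with synthetic data, not the authors' analysis: all variable names, effect sizes and the selection mechanism are assumptions. An unmeasured factor drives both mode selection and the outcome; the analysis then posits a range of assumed mode-effect sizes and examines how much of the observed between-mode difference each assumption leaves unexplained.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Unmeasured factor U drives both selection into web mode and the true
# outcome, so observed covariates alone cannot remove the confounding.
u = rng.normal(size=n)
web = rng.random(n) < 1.0 / (1.0 + np.exp(-u))   # self-selection into web
y_true = 0.5 * u + rng.normal(size=n)

delta = 0.3                                      # true measurement (mode) effect
y_obs = y_true + delta * web                     # observed, mode-affected outcome

# A naive mode comparison mixes the mode effect with selection bias.
naive = y_obs[web].mean() - y_obs[~web].mean()

# Sensitivity analysis: for a range of *assumed* mode-effect sizes, remove
# that effect and see what between-mode difference remains (selection).
results = {}
for assumed in (0.0, 0.3, 0.6):
    adjusted = y_obs - assumed * web
    results[assumed] = adjusted[web].mean() - adjusted[~web].mean()
    print(f"assumed mode effect {assumed:.1f}: "
          f"remaining mode difference {results[assumed]:.2f}")
```

Because the true mode effect (0.3 here) is unknown in practice, the analyst reports results across the grid of assumed values rather than committing to one; even at the correct assumption a residual difference remains, which is the part attributable to selection.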


Developing practitioner guidance to reduce the risk of measurement effects occurring in mixed-mode questionnaires

Mr Richard Bull (The National Centre for Social Research) - Presenting Author
Dr Aditi Das (The National Centre for Social Research)
Ms Jo d'Ardenne (The National Centre for Social Research)
Mr Zac Perera (The National Centre for Social Research)

Many probability-based surveys, which have historically only been conducted face-to-face, are now transitioning to online or mixed-mode designs. A key issue for such transitions is how to change the survey mode without introducing measurement effects.

In this session we will present findings from the ESRC-funded Survey Futures programme on reducing mode effects. Researchers at the National Centre for Social Research (UK) have conducted a literature review, and consulted with survey practitioners, to develop guidance on how to assess survey questions against a checklist of criteria associated with measurement non-equivalence. The framework considers, amongst other things, question sensitivity, question complexity and visual presentation. This framework includes practical recommendations, for a non-technical audience, on what mitigations can be taken during the questionnaire design phase to reduce the risk of measurement effects occurring.

In this session we present findings from these activities, demonstrate our draft questionnaire review framework and provide information on how practitioners can access this resource in their future studies.


Mode effects, question sensitivity and self-assessment of skills – results of a methodological experiment in a Polish large-scale social survey

Dr Maja Rynko (SGH Warsaw School of Economics) - Presenting Author
Dr Tomasz Drabowicz (University of Lodz)

Most analyses of mode effects are carried out either on observational data or on experimental data with small or unrepresentative samples of respondents. Also, little is known about mode effects in the context of reported information on skills. The assessment of skill levels is becoming increasingly important, and skills are most often measured by self-assessment, which is much easier (and cheaper) to implement in large-scale surveys than direct assessment. The aim of this study is to investigate mode effects in self-reported literacy and numeracy skills. We use data from a national follow-up survey to the PIAAC, with 5224 respondents representative of the Polish adult population. Respondents were randomly assigned to the mode of the literacy and numeracy self-assessment questionnaire – PAPSI (self-administered) or CAPI (interviewer-administered). We analyse the data using IRT methodology and regression modelling. The results of the randomised controlled experiment show a significant mode effect. The proportion of respondents who rate their skills as very good is significantly higher among those who responded in the CAPI mode. This may reflect satisficing or social desirability effects. In addition, the mode effect is larger in the literacy domain than in the numeracy domain, which may indicate a higher level of question sensitivity in the literacy domain.
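The study's analysis uses IRT and regression modelling; as a simpler sketch of the core comparison that random assignment enables, a two-proportion z-test contrasts the share of "very good" self-ratings between modes. The counts and probabilities below are illustrative assumptions, not the study's data.

```python
import math
import random

random.seed(1)
n = 2_600  # respondents per arm (illustrative, not the study's sample sizes)

# Assumed probabilities of a "very good" self-rating under each mode; the
# higher CAPI rate mimics social-desirability pressure with an interviewer.
p_capi_true, p_papi_true = 0.30, 0.22
capi = sum(random.random() < p_capi_true for _ in range(n))
papi = sum(random.random() < p_papi_true for _ in range(n))

# Two-proportion z-test: under random assignment to mode, any difference
# beyond sampling noise is attributable to the mode itself.
p1, p2 = capi / n, papi / n
pooled = (capi + papi) / (2 * n)
se = math.sqrt(pooled * (1 - pooled) * 2 / n)
z = (p1 - p2) / se
print(f"CAPI {p1:.3f} vs self-administered {p2:.3f}, z = {z:.2f}")
```

Randomisation is what licenses the causal reading here: without it, as the session description notes, the same difference could reflect mode selection rather than a mode effect.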


Mode Effects in Adult Population Survey of Human Capital Study

Dr Szymon Czarnik (Jagiellonian University in Cracow) - Presenting Author
Professor Marcin Kocór (Jagiellonian University in Cracow)

The Human Capital Study is a project researching the labour market in Poland, part of which is an adult population survey. In the first edition (2010-2014), a total of nearly 90,000 CAPI interviews were conducted, making it one of the largest labour market surveys in Europe. The present paper utilizes data from the second edition (2017-2021): in the 2021 round, due to the COVID-19 pandemic, a mixed-mode design was introduced on a sample of 2529 respondents, who were offered a choice between three survey modes. As a result, 1386 CAPI, 896 CATI, and 247 CAWI interviews were conducted.

We analyze factors influencing mode selection, among which age, education level and place of residence play a prominent role, as well as mode effects for a variety of survey questions. In the next step, we investigate how selection bias may affect coefficients in regression models whose dependent variables are susceptible to mode effects.
For all three survey modes, we analyze the extent to which straightlining occurs in response patterns. Additionally, for CAPI and CATI, we shed some light on the impact a survey mode may have on interviewer effects.
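The abstract does not specify how straightlining is operationalised; a common minimal definition, shown here as an illustrative sketch (the grid data are invented), flags a respondent who gives an identical answer to every item in a battery:

```python
# Straightlining: an identical response to every item of a grid question,
# often read as a sign of satisficing rather than genuine answers.
def is_straightliner(responses):
    """True if every item in the battery received the same answer."""
    return len(set(responses)) == 1

# Hypothetical 5-item grid answers for three respondents.
grid_answers = [
    [3, 3, 3, 3, 3],   # straightliner
    [4, 2, 5, 3, 4],
    [1, 1, 2, 1, 1],
]
rate = sum(map(is_straightliner, grid_answers)) / len(grid_answers)
print(f"straightlining rate: {rate:.2f}")
```

Comparing such rates across CAPI, CATI and CAWI is one way to quantify how response behaviour differs by mode.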