All time references are in CEST

Mode Effects in Practice

Session Organisers: Dr Richard Silverwood (University College London), Dr Liam Wright (University College London)
Time: Tuesday 18 July, 09:00 - 10:30
Room:
Recent years have seen greater mixing of modes, both within and (in longitudinal surveys) between sweeps of data collection. Unaccounted for, mode effects – differential measurement errors due solely to the mode utilised – can be a problem for obtaining accurate and unbiased estimates. For instance, mode effects can induce associations between variables and changes in survey mode between sweeps may bias estimates of change. Investigating and addressing mode effects is therefore an increasingly important area of methodological research.
Mode effects in mixed mode survey data are typically confounded by mode selection. Many methods to account for mode effects – e.g. statistical adjustment and imputation – presuppose that mode selection can be controlled for with available data. This is difficult, however, as selection factors may be unmeasured (especially in cross-sectional studies), measured poorly, subject to mode effects themselves, or unknown to the applied researcher, for whom mode selection typically lies outside their area of expertise. Determining the level of bias in practice and promoting methods that do not rely on assumptions about mode selection are of fundamental importance given the continued move towards mixed mode designs.
We invite submissions of research that investigates and addresses mode effects, with particular focus on the longitudinal survey setting. Novel methods for overcoming mode selection, including via integrating experimental work or using other non-standard analytic approaches, are of particular interest. We also invite contributions relating to user-friendly implementations of methods to handle mode effects and their communication to non-technical audiences.
Keywords: Mode effects; Mixed modes; Statistical analysis
Dr Liam Wright (Centre for Longitudinal Studies, University College London) - Presenting Author
Dr Georgia Tomova (Centre for Longitudinal Studies, University College London)
Dr Richard Silverwood (Centre for Longitudinal Studies, University College London)
Surveys are increasingly adopting mixed-mode methodologies, whereby data are collected by some combination of face-to-face, telephone, web and video. Due to differences in how items are presented (including the presence or absence of interviewers), responses can differ systematically between modes, a phenomenon referred to as mode effects. Unaccounted for, mode effects can introduce bias in analyses of mixed-mode survey data.
Mode effects in mixed-mode survey data are typically confounded by mode selection since survey participants self-select into their preferred mode. Many methods to account for mode effects – e.g. statistical adjustment and imputation – rely on an assumption that selection into mode can be fully controlled for with available data. The plausibility of this assumption may, however, be questioned as selection factors may be unknown, unmeasured (especially in cross-sectional studies), measured poorly, or subject to mode effects themselves.
In this talk, we place the problem of mode effects within the simple and intuitive causal directed acyclic graph (DAG) framework. Using this framework, we describe the main methods for handling mode effects. We emphasise a promising but underutilised approach, simulation-based sensitivity analysis, which does not require the assumption that all selection into mode has been modelled. We demonstrate an application of this sensitivity analysis approach using real-world mixed-mode data from the Centre for Longitudinal Studies’ British birth cohort studies.
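The logic of such a simulation-based sensitivity analysis can be sketched in a few lines. The sketch below is purely illustrative – the simulated data, the binary health outcome and the posited effect magnitudes are assumptions for demonstration, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative mixed-mode data: a binary outcome (e.g. 1 = reports good
# health) observed under two modes. For the sake of the example, the web
# mode is assumed to inflate positive reports.
n = 10_000
mode_web = rng.binomial(1, 0.6, n)               # 1 = web, 0 = face-to-face
y_obs = rng.binomial(1, 0.70 + 0.05 * mode_web)  # hypothetical responses

naive_mean = y_obs.mean()

# Sensitivity analysis: posit a grid of plausible mode-effect magnitudes
# (the shift in the probability of a positive response on web) and
# recompute the adjusted estimate under each scenario.
adjusted = {}
for delta in (0.00, 0.03, 0.05, 0.08):
    y_adj = y_obs - delta * mode_web   # remove the posited web effect
    adjusted[delta] = y_adj.mean()
```

Reporting the adjusted estimate across the grid of posited magnitudes shows how sensitive a substantive conclusion is to mode effects, without requiring that selection into mode be modelled.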
Mr Richard Bull (The National Centre for Social Research) - Presenting Author
Dr Aditi Das (The National Centre for Social Research)
Ms Jo d'Ardenne (The National Centre for Social Research)
Mr Zac Perera (The National Centre for Social Research)
Many probability-based surveys, which have historically only been conducted face-to-face, are now transitioning to online or mixed-mode designs. A key issue for such transitions is how to change the survey mode without introducing measurement effects.
In this session we will present findings from the ESRC-funded Survey Futures programme on reducing mode effects. Researchers at the National Centre for Social Research (UK) have conducted a literature review, and consulted with survey practitioners, to develop guidance on how to assess survey questions against a checklist of criteria associated with measurement non-equivalence. The framework considers, amongst other things, question sensitivity, question complexity and visual presentation. This framework includes practical recommendations, for a non-technical audience, on what mitigations can be taken during the questionnaire design phase to reduce the risk of measurement effects occurring.
We will present findings from these activities, demonstrate our draft questionnaire review framework and provide information on how practitioners can access this resource for use in their future studies.
Mrs Leah Bloy (Hebrew University Business School) - Presenting Author
Dr Nechumi Malovicki-Yaffe (Tel Aviv University)
Mixed-mode surveys present challenges in distinguishing between selection effects (biases from respondents' survey mode preferences) and mode effects (biases from the administration method). Existing solutions, such as embedded experiments or comparisons between matched samples, provide insights but are resource-intensive, difficult to implement, and limited in addressing selection bias. They often assume equal access to all modes and struggle to disentangle intertwined effects of mode and respondent selection.
This paper introduces a cost-effective method that resolves these issues by incorporating a "selection" variable, which categorizes respondents into three groups: (1) those unique to mode A, (2) those unique to mode B, and (3) those capable of responding in both modes, who are randomly assigned by the researchers to one of the two conditions. This classification breaks the full collinearity between selection effects and mode effects, which traditional approaches based on two groups fail to address. Combined with the "survey mode" variable, it enables separation of these biases using multiple regression analysis. These tests identify the independent contributions of each bias source to response variance, clarifying their relative impact.
The method is easy to implement in mixed-mode surveys by including a question about respondents' mode preferences. This facilitates robust analysis that controls for both mode and selection effects. Multiple regression models evaluate whether inconsistencies in responses arise from mode effects, selection effects, or both, and quantify the magnitude of each bias.
By disentangling these effects, the proposed method simplifies bias identification while improving the validity of mixed-mode survey findings. Researchers gain deeper insights into the origins of inconsistencies, enhancing the reliability of their conclusions.
Dr Georgia Tomova (University College London) - Presenting Author
Dr Richard Silverwood (University College London)
Dr Liam Wright (University College London)
Surveys are increasingly adopting mixed-mode methodologies, whereby data are collected by some combination of face-to-face, telephone, web and video. Due to differences in how items are presented and the presence/absence of interviewers, responses can differ systematically between modes, a phenomenon referred to as a mode effect. Unaccounted for, mode effects can introduce bias in analyses of mixed-mode survey data.
Mode effects in mixed-mode survey data are typically confounded by mode selection. Many methods to account for mode effects – e.g. statistical adjustment and imputation – rely on an assumption that selection into mode can be fully controlled for with available data, but this may be unachievable in practice. In an accompanying presentation, we emphasise a promising but underutilised approach, simulation-based sensitivity analysis, which does not require the assumption that all selection into mode has been modelled.
Performing such sensitivity analyses requires assumptions about the plausible magnitude of potential mode effects. Thankfully, a large number of experimental and re-interview studies have been performed to assess this for different variables in various between-mode comparisons. Unfortunately, their results are spread across multiple papers and are not straightforward to extract. We performed a systematic review of the existing literature and combined the results into a searchable database, so that relevant estimates can be more easily identified by researchers wanting to perform sensitivity analyses. We will present the findings from the systematic review and illustrate the use of the resultant database in analyses of mixed-mode data from the Centre for Longitudinal Studies’ British birth cohorts.
Dr Szymon Czarnik (Jagiellonian University in Cracow) - Presenting Author
Professor Marcin Kocór (Jagiellonian University in Cracow)
The Human Capital Study is a project researching the labour market in Poland, part of which is an adult population survey. In the first edition (2010-2014), a total of nearly 90,000 CAPI interviews were conducted, making it one of the largest labour market surveys in Europe. The present paper utilizes data from the second edition (2017-2021): in the 2021 round, due to the COVID-19 pandemic, a mixed-mode design was introduced on a sample of 2,529 respondents, who were offered a choice between three survey modes. As a result, 1,386 CAPI, 896 CATI, and 247 CAWI interviews were conducted.
We analyze the factors influencing mode selection – with age, education level and place of residence playing prominent roles – as well as mode effects for a variety of survey questions. In the next step, we investigate how selection bias may affect coefficients in regression models when dependent variables are susceptible to mode effects.
For all three survey modes, we analyze the extent to which straightlining occurs in response patterns. Additionally, for CAPI and CATI, we shed some light on the impact survey mode may have on interviewer effects.
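One common operationalisation of straightlining, sketched here under the assumption of a grid of Likert-type items (the data are illustrative; the paper does not specify its measure):

```python
import numpy as np

# Illustrative grid of Likert-type items (rows = respondents, cols = items).
grid = np.array([
    [3, 3, 3, 3, 3],   # pure straightliner
    [1, 2, 4, 2, 5],
    [4, 4, 4, 3, 4],
])

# Strict indicator: every answer in the grid is identical.
straightlined = (grid == grid[:, [0]]).all(axis=1)

# Softer measure: within-respondent spread (0 for perfect straightliners).
spread = grid.std(axis=1)
```

Comparing the share of straightliners (or the average spread) across CAPI, CATI and CAWI then gives a mode-level picture of response-pattern quality.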
Dr Nino Mushkudiani (Statistics Netherlands) - Presenting Author
Dr Kees van Berkel (Statistics Netherlands)
Dr Barry Schouten (Statistics Netherlands)
In a two-stage reinterview study conducted in the Dutch Health Survey in 2022 and 2023, relatively large mode-specific measurement biases were found. Biases were greater for survey questions that are subjective, such as self-perceived health, and/or that require cognitive effort and recall, such as various lifestyle indicators. Biases were modest to negligible for survey questions that were expected in advance to be relatively objective and easy to answer.
The identification of the biases led to the most important follow-up question: how to deal with them in the production of statistics. Over the past decade, several scientists have proposed strategies to account for mode-specific measurement biases. These strategies range from tailoring data collection to adjusting for bias.
In this paper, we apply and compare three strategies that differ widely in terms of cost, implementation complexity, and data processing. The first strategy is mode calibration, where the survey mode is added as a weighting variable to the calibration process. This is a crude, but cheap, approach. The second strategy is an adaptive survey design technique that incorporates additional constraints to ensure measurement equivalence. This strategy has a strong impact on the complexity of data collection. The third strategy is to adjust for biases using (periodic) reinterviews, evaluating the resulting estimates in terms of their mean square error. This strategy requires investment and impacts data processing. We illustrate the three approaches for the Dutch Health Survey, but we also discuss implementation for other general population surveys.
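In its simplest form, mode calibration amounts to poststratifying the weights so that the weighted mode distribution matches a chosen target. A minimal sketch (the target shares and data are illustrative assumptions, not the Dutch Health Survey's actual calibration model):

```python
import numpy as np

# Illustrative respondents: observed mode (0 = interviewer, 1 = web)
# and equal base weights.
mode = np.array([0, 0, 1, 1, 1, 1])
base_w = np.ones(len(mode))

# Target mode shares to which the weights are calibrated (assumed here;
# in practice these might come from a benchmark or design target).
target = {0: 0.5, 1: 0.5}

w = base_w.copy()
for m, share in target.items():
    mask = mode == m
    w[mask] *= share * base_w.sum() / base_w[mask].sum()

# Weighted mode shares now match the targets; total weight is preserved.
shares = {m: w[mode == m].sum() / w.sum() for m in target}
```

In production settings, mode would be one of several margins in a full calibration (e.g. raking over demographics and mode jointly), but the one-variable case above conveys why the approach is cheap: it only reweights, leaving data collection untouched.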