
ESRA 2023 Glance Program


All time references are in CEST

Innovations in Adaptive Survey Designs

Session Organisers: Professor Christoph Kern (LMU Munich)
Professor Tobias Gummer (GESIS)
Dr Bernd Weiß (GESIS)
Ms Saskia Bartholomäus (GESIS)
Mr John Collins (University of Mannheim)
Time: Tuesday 18 July, 09:00 - 10:30
Room

Selective participation, low response rates, and decreasing survey engagement threaten the validity of results drawn from survey samples. Against this background, adaptive survey designs are increasingly used to mitigate survey errors in an effort to increase data quality. This design paradigm acknowledges that populations are heterogeneous and, consequently, participation and answering processes differ between population strata. The basic idea behind adaptive survey designs is to vary data collection protocols across these strata to improve key performance indicators (e.g., indicators concerning participation, sample composition, data quality, survey costs). Nonetheless, implementing adaptive designs is challenging as their success critically depends on both correctly identifying different risk groups for treatment and designing and allocating effective treatments.

Modern predictive modeling techniques, carefully designed treatments, and an efficient, tailored allocation of treatments have the potential to innovate the application of adaptive survey designs and improve their effectiveness. Examples include, but are not limited to, the use of machine learning models to identify risk groups, the design of tailored interventions based on the (past or inferred) preferences of survey participants, and the application of modern data-driven methods to study heterogeneous effects of the implemented treatments.
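
To make the idea concrete, the following minimal sketch (not part of the session description; the simulated data, variable names, and the 20% cutoff are purely illustrative assumptions) shows how a machine learning model could be used to predict response propensities and flag a low-propensity stratum for a tailored data collection protocol:

```python
# Minimal sketch: predict response propensity from frame/paradata features,
# then flag a low-propensity stratum for a tailored treatment.
# All data, features, and the 20% cutoff are illustrative assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5_000
X = rng.normal(size=(n, 6))              # stand-in for frame data / paradata
p_true = 1 / (1 + np.exp(-(0.8 * X[:, 0] - 0.5 * X[:, 1])))
y = rng.binomial(1, p_true)              # 1 = responded in a previous wave

X_train, X_new, y_train, y_new = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Predicted propensities for the upcoming wave's sample
propensity = model.predict_proba(X_new)[:, 1]

# Allocate the tailored protocol (e.g., a higher incentive) to the
# lowest-propensity 20% of cases; everyone else gets the standard protocol.
cutoff = np.quantile(propensity, 0.20)
protocol = np.where(propensity <= cutoff, "tailored", "standard")
print(dict(zip(*np.unique(protocol, return_counts=True))))
```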

We invite submissions that test the application of adaptive survey designs in social science surveys. Topics of interest include:

• How can adaptive survey design help to reduce nonresponse or measurement errors?
• How can adaptive survey design be used to ease operative efforts required to field a survey and lower survey costs?
• How to leverage advances in predictive modeling when designing and evaluating adaptive designs?
• How to design and select treatments for different sample strata?
• How to identify risk groups requiring different treatment?
• Trade-offs between all the above points

Papers

The SHARE Respondent Driven Contact Experiment – Testing Prepaid Incentives and Contact Strategies

Dr Michael Bergmann (SHARE BERLIN Institute)
Dr Arne Bethmann (SHARE BERLIN Institute)
Ms Charlotte Hunsicker (SHARE BERLIN Institute)
Mr Alexander Schumacher (SHARE BERLIN Institute) - Presenting Author
Dr Jenny Olofsson (Umeå University)
Professor Gunnar Malmberg (Umeå University)
Dr Filip Fors Connolly (Umeå University)

Survey response rates are affected not only by respondents’ willingness to participate, but also by the success of initial contact attempts. The SHARE Respondent Driven Contact (SHARE-ReDCon) project integrates principles of adaptive survey design by testing tailored contact strategies aimed at improving response and contact rates for populations aged 50 and older.

Using a randomised 2x2 factorial experiment in the German and Swedish SHARE Wave 10 refreshment samples, the study investigates whether enabling respondent-initiated contact can enhance survey participation. The first experimental factor compares standard invitation letters to an adapted version encouraging respondents to initiate contact themselves (via phone or email). The second factor tests the effectiveness of prepaid monetary incentives by randomly assigning participants to receive such an incentive or not. We assume that combining prepaid incentives with immediate opportunities for reciprocation through respondent-initiated contact will increase the effectiveness of the incentives.
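
As a purely illustrative aside (not taken from the SHARE-ReDCon abstract; the simulated data, effect sizes, and variable names below are assumptions), main and interaction effects in a 2x2 factorial design of this kind could be estimated with a logistic regression along the following lines:

```python
# Hedged sketch of estimating main and interaction effects in a 2x2 factorial
# contact/incentive experiment; data and effect sizes are simulated assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 4_000
df = pd.DataFrame({
    "redcon_letter": rng.integers(0, 2, n),   # 1 = respondent-initiated contact letter
    "prepaid": rng.integers(0, 2, n),         # 1 = prepaid monetary incentive
})
logit_p = (-0.4 + 0.15 * df.redcon_letter + 0.30 * df.prepaid
           + 0.10 * df.redcon_letter * df.prepaid)
df["responded"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

# The interaction term captures whether prepaid incentives work better when
# reciprocation via respondent-initiated contact is possible.
fit = smf.logit("responded ~ redcon_letter * prepaid", data=df).fit(disp=False)
print(fit.summary().tables[1])
```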

This design acknowledges population heterogeneity by addressing unique engagement challenges among older respondents and compares treatment effects across different operational contexts in Germany and Sweden. The study evaluates key performance indicators, including response rates, cost-efficiency, and data quality, providing evidence-based insights into fieldwork strategies for reducing nonresponse errors. The findings will provide practical guidance on how adaptive strategies can mitigate contact problems and reduce survey errors in social science research. In particular, the study contributes to the literature on incentive effectiveness and respondent engagement by demonstrating how respondent-driven approaches can be incorporated into adaptive survey designs. Furthermore, the research highlights operational challenges and trade-offs involved in implementing such tailored strategies in large-scale surveys. These insights will inform the development of innovative, cost-effective fieldwork procedures that enhance survey participation while maintaining high data quality.


The Impact of Different Adaptive Survey Design Approaches on Data Quality: Evidence from the GESIS Panel

Ms Saskia Bartholomäus (GESIS - Leibniz Institute for the Social Sciences) - Presenting Author
Dr Tanja Kunz (GESIS - Leibniz Institute for the Social Sciences)
Dr Tobias Gummer (GESIS - Leibniz Institute for the Social Sciences)

Adaptive Survey Designs (ASD) have the potential to reduce attrition rates and nonresponse biases in panel surveys. However, reducing the attrition rates of respondents at high risk of dropping out of a panel could inadvertently reduce overall data quality, as these respondents are more likely to show satisficing behavior and provide less thoughtful and engaged responses. The effort respondents put into answering survey questions depends on their cognitive abilities, motivation to complete the task, and task difficulty. To not only reduce attrition rates but also maintain high-quality data, ASD treatments need to increase respondents’ motivation and engagement to avoid the negative consequences that may arise from keeping respondents with a high attrition risk in the panel. Unfortunately, there is a lack of research comparing different ASD approaches and their impact on data quality. Our study examines how different ASD approaches affect data quality in panel surveys. We conducted an experiment in the probability-based, self-administered, mixed-mode GESIS Panel, varying (i) the content and (ii) the length of the questionnaire, and (iii) the monetary incentive provided. Respondents were randomly assigned to either the control group or one of three treatment groups. After identifying high-risk groups to be treated, we simulated several data sets using different ASD approaches to assess the consequences for data quality. The data are still being processed, but our results can provide important insights into the unintended effects of the continued participation of high-risk respondents in panel surveys and into the feasibility of ASD in general.
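
A hedged sketch of this kind of simulation logic (the numbers, variable names, and retention effects below are assumptions for illustration, not the GESIS Panel data or treatments): one can simulate which high-attrition-risk respondents remain in the panel under different ASD treatments and compare a simple data quality indicator, such as straightlining, across scenarios.

```python
# Illustrative sketch: simulate which high-attrition-risk respondents stay in
# the panel under different ASD treatments and compare a simple data quality
# indicator (straightlining). All numbers are assumed for illustration.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
n = 3_000
panel = pd.DataFrame({
    "attrition_risk": rng.uniform(0, 1, n),
    "straightlines": rng.binomial(1, 0.10, n),
})
# Assume high-risk respondents are somewhat more likely to satisfice.
high_risk = panel.attrition_risk > 0.7
panel.loc[high_risk, "straightlines"] = rng.binomial(1, 0.25, high_risk.sum())

def simulate(retention_boost):
    """Retention probability; the treatment boosts retention of high-risk cases."""
    p_stay = np.clip(1 - panel.attrition_risk + retention_boost * high_risk, 0, 1)
    stays = rng.binomial(1, p_stay).astype(bool)
    return pd.Series({"retained": stays.mean(),
                      "straightlining_rate": panel.loc[stays, "straightlines"].mean()})

scenarios = {"control": 0.0, "shorter_questionnaire": 0.10, "higher_incentive": 0.20}
print(pd.DataFrame({k: simulate(b) for k, b in scenarios.items()}).round(3))
```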


Prediction-based Adaptive Designs for Reducing Attrition Rates and Bias in Panel Surveys

Mr John Collins (University of Mannheim) - Presenting Author
Ms Saskia Bartholomäus (GESIS - Leibniz Institute for the Social Sciences)
Professor Tobias Gummer (GESIS - Leibniz Institute for the Social Sciences)
Dr Bernd Weiß (GESIS - Leibniz Institute for the Social Sciences)
Professor Christoph Kern (Ludwig Maximilian University of Munich)

Nonresponse is a critical issue for data quality in panel surveys. Machine learning (ML) has proven effective in predicting which panelists are at risk of nonresponse, opening up the opportunity to target pre-emptive measures. The next logical step is therefore to use ML-based predictions to inform an Adaptive Survey Design (ASD) in which participants are targeted for incentives based on their predicted response propensity. However, how best to use ML-based predictions to target interventions and develop an effective ASD remains an open question. Prior research consists primarily of assumption-dependent simulations or experimental ASD implementations with limited generalizability.

This paper presents a method for combining the results of a field experiment on incentives with ML-based propensity models to simulate ex post, with minimal assumptions, the possible outcomes of a wide range of ASD strategies. This includes different targeting strategies (predicted high- and/or moderate-risk participants) in combination with different interventions (cash incentive, interesting survey module, shorter survey). We find that targeting only the 15% of panelists with the lowest propensity with additional cash incentives, or offering a survey module on the participant’s preferred topic, can reduce overall wave nonresponse rates by 1-2%. Our findings on nonresponse bias are mixed, showing a decrease in bias for some variables but no change or an increase for others. We discuss the causes of these differing outcomes and conclude that enticing more respondents is likely to yield lower bias if the panel is sufficiently diverse with respect to the variable in question. The results of our study provide guidance on how ML predictions can best be used to target different interventions in ASDs and improve data quality in panel surveys.
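
For illustration only (the propensity distribution, the incentive effect, and the 15% cutoff below are assumed values, not the paper's estimates), the ex-post simulation logic can be sketched as ranking panelists by predicted propensity, targeting the lowest 15% with an incentive, and applying an experimentally estimated treatment effect to the targeted cases:

```python
# Hedged sketch of the general idea: rank panelists by an ML-predicted
# response propensity, target the lowest 15% with an incentive, and apply an
# experimentally estimated treatment effect to simulate wave nonresponse.
# Distribution, effect size, and cutoff are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(3)
n = 10_000
propensity = rng.beta(5, 2, n)          # ML-predicted response propensities

# Treatment effect on the response probability, e.g. estimated from a
# randomized incentive experiment (illustrative value).
incentive_effect = 0.08

targeted = propensity <= np.quantile(propensity, 0.15)
p_response = np.clip(propensity + incentive_effect * targeted, 0, 1)

baseline_nr = 1 - rng.binomial(1, propensity).mean()
asd_nr = 1 - rng.binomial(1, p_response).mean()
print(f"nonresponse: baseline {baseline_nr:.3f} vs targeted ASD {asd_nr:.3f}")
```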


Accounting for mode-specific measurement bias in adaptive mixed-mode survey design

Dr Barry Schouten (Statistics Netherlands) - Presenting Author
Dr Kees van Berkel (Statistics Netherlands)
Dr Nino Mushkudiani (Statistics Netherlands)

In a large-scale re-interview study conducted in the Dutch Health Survey in 2022 and 2023, relatively sizeable mode-specific measurement biases were estimated. The biases were found to vary across respondents’ socio-demographic characteristics.
By default, household/person surveys at Statistics Netherlands assume an adaptive survey design in which survey modes are a key design feature. The identification of the biases therefore led to the follow-up question of whether and how the differences in biases should be accounted for in adaptive survey design. To date, despite some attempts to include measurement error in the framework of adaptive survey designs, targeting is still primarily focused on representation. This is not surprising, as survey designers generally do not associate measurement error directly with survey costs. Furthermore, unlike for nonresponse, there is no tradition of intervening in interviews.
In this paper, we focus on survey mode allocation that attempts to address representation and measurement simultaneously. We compare two strategies: one in which an additional constraint on comparability between subpopulations is added, and one in which incomparability is minimized directly. We illustrate the two approaches and discuss the implications for the implementation of adaptive survey design.
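
As a toy illustration of the two strategies (the response rates, mode-specific biases, costs, and thresholds below are assumed values, not Statistics Netherlands' figures), mode allocation can be framed as a small constrained optimization over per-subpopulation mode shares:

```python
# Toy sketch of the two strategies: (A) minimize expected cost subject to a
# comparability constraint on mode-specific bias between two subpopulations,
# and (B) minimize the bias incomparability directly, subject to a minimum
# response-rate constraint. All numbers are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize

# Rows: two subpopulations (e.g., younger, older); columns: modes (web, F2F).
resp_rate = np.array([[0.35, 0.60],
                      [0.25, 0.65]])
mode_bias = np.array([[0.10, 0.02],   # assumed mode-specific measurement bias
                      [0.15, 0.03]])
cost = np.array([[5.0, 40.0],
                 [5.0, 45.0]])

def shares(x):
    """x holds the web share per subpopulation; the remainder goes to F2F."""
    web = np.clip(x, 0.0, 1.0)
    return np.column_stack([web, 1.0 - web])

def bias_gap_sq(x):
    b = (shares(x) * mode_bias).sum(axis=1)   # expected bias per subpopulation
    return (b[0] - b[1]) ** 2

def mean_response(x):
    return (shares(x) * resp_rate).sum(axis=1).mean()

def total_cost(x):
    return (shares(x) * cost).sum()

bounds = [(0.0, 1.0), (0.0, 1.0)]
resp_constraint = {"type": "ineq", "fun": lambda x: mean_response(x) - 0.45}

# Strategy A: cheapest allocation whose squared bias gap stays below (0.02)^2.
strat_a = minimize(total_cost, x0=[0.5, 0.5], bounds=bounds,
                   constraints=[resp_constraint,
                                {"type": "ineq",
                                 "fun": lambda x: 0.02 ** 2 - bias_gap_sq(x)}])

# Strategy B: minimize the bias gap itself, keeping the response rate above 0.45.
strat_b = minimize(bias_gap_sq, x0=[0.5, 0.5], bounds=bounds,
                   constraints=[resp_constraint])

print("constraint on comparability, web shares:", np.round(strat_a.x, 2))
print("minimize incomparability,    web shares:", np.round(strat_b.x, 2))
```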