
ESRA 2025 Preliminary Program

All time references are in CEST

Responsive and Adaptive Surveys: Are they really addressing current Data Collection challenges?

Session Organisers Dr Dimitri Prandner (Johannes Kepler University of Linz)
Professor Patrick Kutschar (Paracelsus Medical University)
Professor Martin Weichbold (Paris Lodron University of Salzburg)
Mr Christopher Etter (Paris Lodron University of Salzburg)
Time Friday 18 July, 09:00 - 10:30
Room Ruppert Wit - 0.52

Recent evidence challenges the longstanding reliance on rigid, single-mode surveys. Individual differences in motivation to participate, preferred survey modes, or preferences for specific question formats suggest the need for more flexible, participant-tailored approaches. Responsive and adaptive designs (RAD), which allow various sampling and surveying methods to be tailored to different populations, survey topics, and data collection contexts, have therefore gained traction in recent years.
Methodological research has consistently shown that pre-planned conditional adaptive survey paths and situational, dynamically adjusting data collection procedures can improve both cost efficiency and data quality. It is often argued that using RAD to adapt survey methods in real time, aligning design features with respondent characteristics, improves measurement quality overall.
However, while RAD can mitigate certain sources of error and bias, it may also introduce new ones. Within the total survey error (TSE) framework, RAD-related design trade-offs can affect error sources in both TSE components: representation (e.g., refusals, nonresponse) and measurement (e.g., interviewer effects, context effects).
We invite theoretical, conceptual, and empirical papers from laboratory and field research (small to large scale) that address the implications of RAD for data quality. Topics of interest include, but are not limited to:
• Tailored contact strategies and survey modes (e.g., integration of RAD and push-to-web approaches, individualized incentives)
• Adaptive changes to design features during the interview (e.g., proxies, mixed mode/methods, instruments, question difficulty, format or layout, visuals and pictures)
• Predictors for and RAD application in certain respondent groups and specific populations (e.g., vulnerable populations)
• The role of advanced technologies in real-time data monitoring and adjustment (e.g., AI-assisted adaptive procedures, machine learning)
• The use of auxiliary data to inform adaptive survey design
• Strategies and

Keywords: responsive and adaptive designs; sampling; response rate; data quality

Papers

Using Responsive Design to Optimise Response and Improve Achieved Sample Profile in Longitudinal Studies of Learners on Technical Education Courses

Mr Simon Moss (National Centre for Social Research (NatCen))
Ms Line Knudsen (National Centre for Social Research (NatCen)) - Presenting Author
Ms Noémie Bourguignon (National Centre for Social Research (NatCen))

Ongoing reforms in England, initiated by the previous government, aim to improve the quality of technical education. The Technical Education Learners’ Survey (‘Tech Ed’) is designed to monitor the impact of these reforms. To maximise the use of a limited budget for telephone fieldwork, cases were prioritised on the basis of their modelled likelihood of responding online. Implementing this responsive design improved the achieved sample profile and response rates among sub-groups of the target population.

The initial waves of the ‘Tech Ed’ study followed up different cohorts of learners using a ‘web-first’ approach, with a series of reminders sent to prompt self-completion. Follow-up Computer Assisted Telephone Interviewing (CATI) was then used to increase response rates.

Following the start of fieldwork, unproductive cases were assigned to batches for follow-up telephone interviewing based on their modelled likelihood of responding online. The final model variables include sex, age, ethnicity, deprivation rank, and additional auxiliary variables from the National Pupil Database (NPD). Cases were ordered from lowest to highest predicted productivity and contacted in that priority order by telephone interviewers. This approach made the most of interviewer effort and resources, with the added benefit of improved response and sample quality.
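A minimal sketch of this kind of propensity-based prioritisation, assuming a logistic model fitted to prior-wave outcomes; all variable and column names here are illustrative, not the study's actual specification:

```python
# Sketch: prioritise unproductive web cases for CATI follow-up by
# modelled likelihood of responding online. Column names are assumed.
import pandas as pd
from sklearn.linear_model import LogisticRegression

def prioritise_for_cati(history: pd.DataFrame, unproductive: pd.DataFrame,
                        n_batches: int = 4) -> pd.DataFrame:
    predictors = ["sex", "age", "ethnicity", "deprivation_rank"]  # + NPD variables
    X = pd.get_dummies(history[predictors], drop_first=True)
    y = history["responded_online"]  # 1 = completed the web survey

    model = LogisticRegression(max_iter=1000).fit(X, y)

    X_new = pd.get_dummies(unproductive[predictors], drop_first=True)
    X_new = X_new.reindex(columns=X.columns, fill_value=0)
    out = unproductive.assign(p_online=model.predict_proba(X_new)[:, 1])

    # Lowest predicted online propensity first: these cases are least
    # likely to respond without interviewer effort, so they get priority.
    out = out.sort_values("p_online")
    out["cati_batch"] = pd.qcut(out["p_online"], q=n_batches, labels=False)
    return out
```

Ordering cases ascending by predicted online propensity concentrates interviewer effort on the cases least likely to respond without it, which is what drives the sample-profile gains described above.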

With an emphasis on response, the presentation will outline and discuss the approach taken to prioritise cases for follow-up telephone interviewing, as well as its impact on response rates among sub-groups of the target population. The presentation will also reflect on using responsive design to reduce bias and improve sample quality in a cost-effective manner in longitudinal and panel studies using a web-CATI approach.


Customization Options in Online Surveys – a Booster of Survey Data Quality?

Ms Vanessa Schmieja (Forschungszentrum Jülich) - Presenting Author
Dr Hawal Shamon (Forschungszentrum Jülich)
Professor Dirk Temme (Bergische Universität Wuppertal)

Standardized surveys face the challenge that participants may have individual preferences and needs that deviate from standard recommendations for questionnaire design. Customization options are measures that respond to these individual preferences and needs. On the one hand, tailoring the questionnaire to each participant (e.g., Dillman et al. 2014) might promote optimizing behavior during survey participation. On the other hand, adaptations through customization options are also likely to be associated with greater respondent burden and with limited comparability between participants who have chosen different adaptations. Given these advantages and disadvantages, it is worth examining to what extent customization options contribute to higher data quality beyond adherence to standard recommendations for questionnaire design.

To investigate this research question, we randomly assigned participants of an online survey conducted in 2024 to four groups that differed in questionnaire quality (high vs. low) and customization options (with vs. without). In the two groups with customization options, participants could make changes to the survey themselves: for example, they were offered several additional comment fields and the opportunity to compare their own responses with those of respondents to previous surveys. In addition to a regular version, a version for people with color blindness and one for people who use a screen reader were selectable. Analyses will be completed in the coming weeks and presented at the ESRA conference.
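As an illustration of the 2x2 assignment described above (questionnaire quality by customization options), a minimal sketch; the condition labels and seeding scheme are assumptions, not the study's actual implementation:

```python
# Sketch: deterministic 2x2 factorial assignment. Labels are assumed.
import random

CONDITIONS = [
    ("high_quality", "with_customization"),
    ("high_quality", "without_customization"),
    ("low_quality", "with_customization"),
    ("low_quality", "without_customization"),
]

def assign_condition(participant_id: str, seed: int = 2024) -> tuple[str, str]:
    # Seed per participant so a respondent who re-enters the survey
    # lands in the same experimental group.
    rng = random.Random(f"{seed}:{participant_id}")
    return rng.choice(CONDITIONS)
```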


Stratification Method for Identifying Subgroups Requiring Different Survey Data Collection Strategies

Mr Yongchao Ma (University of Michigan) - Presenting Author

Increasing workloads and costs in survey data collection have prompted the use of adaptive survey designs in an attempt to find an optimal balance between data quality and costs. Many indicators, such as partial R-indicators, have been developed to select and prioritize subgroups that lack response. These subgroups are homogeneous with respect to response propensities estimated from known auxiliary data about respondents and nonrespondents; balancing response propensities across subgroups improves the representativeness of the overall response. Forming subgroups is straightforward for auxiliary data that are inherently categorical (e.g., gender) or arbitrarily categorized (e.g., age group). This study extends the use of partial R-indicators to subgroup selection based on continuous auxiliary variables, aiming to find the partitioning of multiple continuous variables that minimizes the within-group variance of the estimated response propensities. Specifically, the study develops a tree-like stratification method that identifies subgroups through the following steps: (1) for each continuous auxiliary variable, evaluate all possible split points by calculating the within-group variance of the response propensities after splitting the sample; (2) identify the variable and split point that minimize the within-group variance, and construct a bootstrap confidence interval for that variance; (3) repeat the process for each resulting subgroup (node), retaining only splits that significantly reduce the within-group variance according to the bootstrap confidence intervals; (4) when no further splits significantly reduce the variance, iteratively collapse subgroups that do not significantly increase it, yielding a parsimonious stratification. The proposed method identifies subgroups requiring differential data collection effort, offering insights into improving representativeness through adaptive survey designs. When multiple data collection strategies are available, the method facilitates targeted allocation of strategies based on subgroup response propensities.
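A minimal sketch of steps (1) and (2) above, assuming estimated response propensities are already available for each sample unit; the recursive splitting and collapsing in steps (3) and (4) would wrap this routine:

```python
# Sketch: for one continuous auxiliary variable, find the split point
# that minimises the within-group variance of estimated response
# propensities, with a bootstrap CI for that variance.
import numpy as np

def within_group_variance(p: np.ndarray, mask: np.ndarray) -> float:
    # Pooled within-group variance of propensities for a binary split.
    left, right = p[mask], p[~mask]
    return (len(left) * left.var() + len(right) * right.var()) / len(p)

def best_split(x: np.ndarray, p: np.ndarray, n_boot: int = 500):
    """x: continuous auxiliary variable; p: estimated response propensities."""
    candidates = np.unique(x)[1:]  # split points between observed values
    if candidates.size == 0:
        return None  # variable is constant; no split possible

    # Step (1): evaluate every candidate split point.
    scores = [(within_group_variance(p, x < c), c) for c in candidates]
    # Step (2): keep the split that minimises within-group variance.
    best_var, best_c = min(scores)

    # Bootstrap confidence interval for the variance at the chosen split.
    rng = np.random.default_rng(0)
    boot = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(x), len(x))
        boot.append(within_group_variance(p[idx], x[idx] < best_c))
    lo, hi = np.percentile(boot, [2.5, 97.5])
    return best_c, best_var, (lo, hi)
```

In the full method, this search would run over every continuous auxiliary variable at each node, with the bootstrap intervals deciding whether a split is retained.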


Building the Foundations for Adaptive Survey Design: Identifying Key Predictors of Nonresponse Using Neighbourhood-Level Data

Dr Carla Xena (University of Essex) - Presenting Author
Dr Olena Kaminska (University of Essex)

Adaptive survey designs rely on identifying variables that effectively differentiate between respondents and nonrespondents to target interventions and improve data quality. However, the initial step—determining which variables are most predictive of nonresponse—is often overlooked. This step is critical to streamlining future survey efforts, as it allows researchers to focus only on the most relevant variables for targeting, thereby saving time and resources in survey planning and execution.

In this study, we explore this foundational step using data from a refreshment sample for Wave 14 of Understanding Society, a household longitudinal study in the UK. We link a wide range of neighbourhood-level characteristics from Census and other official statistics to survey sample addresses. By comparing respondents and nonrespondents across these variables, we systematically identify those that significantly differentiate between the two groups. Our analysis includes both face-to-face and push-to-web survey modes to account for mode-specific response patterns. We demonstrate the process of assembling a broad list of potential predictors, testing their association with nonresponse, and narrowing down to the most impactful variables.

Using the identified predictors, we develop adaptive strategies based on 1–3 key variables, and separately on nonresponse models, enabling researchers to design surveys that reduce nonresponse bias through adaptive fieldwork strategies. We demonstrate the process of linking predictors to the dataset, analysing nonresponse patterns, identifying the most influential variables, and constructing adaptive groups for future studies. This methodology offers a practical framework for address-based surveys that leverage neighbourhood-level characteristics.
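A minimal sketch of the screening step, assuming address-level sample records carry an area identifier that links to census aggregates; all column names are hypothetical, not Understanding Society's actual variables:

```python
# Sketch: link neighbourhood-level characteristics to sampled addresses
# and rank variables by how well they separate respondents from
# nonrespondents. Column names are assumed.
import pandas as pd
from scipy import stats

def screen_predictors(sample: pd.DataFrame, census: pd.DataFrame,
                      area_key: str = "lsoa_code") -> pd.DataFrame:
    linked = sample.merge(census, on=area_key, how="left")
    results = []
    for col in census.columns.drop(area_key):
        resp = linked.loc[linked["responded"] == 1, col].dropna()
        nonresp = linked.loc[linked["responded"] == 0, col].dropna()
        t, pval = stats.ttest_ind(resp, nonresp, equal_var=False)
        results.append({"variable": col, "t": t, "p_value": pval})
    # Smallest p-values first: strongest candidates for adaptive targeting.
    return pd.DataFrame(results).sort_values("p_value")
```

The ranked output would then be narrowed, as described above, to the handful of variables used to construct adaptive groups.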