
ESRA 2025 Preliminary Program

              



All time references are in CEST

Optimizing Probability-Based Web Panel Performance

Session Organisers Professor Vasja Vehovar (University of Ljubljana, Faculty of Social Sciences)
Dr Gregor Čehovin (University of Ljubljana, Faculty of Social Sciences)
Ms Andreja Praček (University of Ljubljana, Faculty of Social Sciences)
Time Tuesday 15 July, 09:00 - 10:30
Room Ruppert 0.33

In contemporary social science research, web surveys have become increasingly prevalent, driven largely by the rising costs of traditional survey modes. Because recruiting sample units is expensive, panels are frequently employed for web surveys. Probability-based web panels play a particularly important role: by selecting participants with a known chance of inclusion, they offer a more accurate representation of the population. These panels may also combine web surveys with other modes (e.g., phone or face-to-face) to reach nonrespondents. Panel studies face numerous challenges, including recruiting respondents, establishing and maintaining panels, optimizing data quality and cost-efficiency, minimizing biases, managing mixed survey modes, and retaining respondents across waves while ensuring response quality.

Submissions are invited on methods and strategies related to the improvement and optimization of probability-based panels, focusing on the following processes:

• Assessing and reducing item and unit nonresponse, and mitigating the impact of nonresponse bias;
• Investigating the relationship between costs and overall response quality, including careless responding and satisficing;
• Enhancing sampling techniques, incentive strategies, and recruitment methods to improve initial response rates and respondent retention;
• Comparing response quality between web surveys and other modes, as well as across different device types;
• Assessing response quality and nonresponse bias across panel waves;
• Improving questionnaire design and layout to enhance response quality.

Keywords: probability-based panels, web surveys, mixed-mode surveys, survey costs, response quality, nonresponse bias, experimental survey research, survey design

Papers

Building a Probability-based Online Panel in the Czech Republic: Experience and Challenges Encountered to Date

Ms Paulina Tabery (Public Opinion Research Centre, Institute of Sociology of the Czech Academy of Sciences) - Presenting Author
Mr Matous Pilnacek (Public Opinion Research Centre, Institute of Sociology of the Czech Academy of Sciences)
Mr Martin Spurny (Public Opinion Research Centre, Institute of Sociology of the Czech Academy of Sciences)

Recent years have witnessed a transition in survey modes in the Czech Republic, a phenomenon also observed in other countries. While the ability to reach the target population was already changing gradually, the onset of the Covid-19 pandemic accelerated and intensified this trend. The Public Opinion Research Centre, based at the Institute of Sociology of the Czech Academy of Sciences, has been conducting regular surveys of Czech citizens on political, economic, and social issues ten times per year for the past three decades. These surveys have been conducted in person, but in order to continue this academic project, known as "Our Society", the Centre has recently established a probability-based online panel.
The aim of this paper is twofold: first, to present the process of establishing the panel and the methodological and practical choices made, such as the design of the sampling frame, respondent selection procedures, feedback from the field, and the data weighting procedure chosen to ensure the representativeness of the sample; and second, to share our experience with the panel's functioning to date, including attrition rates and how the panel is being refreshed with new respondents. The presentation will also address item and unit nonresponse, and conclude with a comparative analysis of results from this probability-based panel against other population-based probability surveys currently conducted in the country and against opt-in online panels.
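The abstract mentions a weighting decision but does not specify the procedure. One common choice for ensuring representativeness in panels of this kind is raking (iterative proportional fitting); the following is a minimal sketch under that assumption, with purely illustrative variables and margins:

```python
import pandas as pd

def rake(df, pop_margins, weight_col="weight", n_iter=25):
    """Iterative proportional fitting: adjust weights until the weighted
    sample margins match the population margins of each variable."""
    w = df[weight_col].astype(float).copy()
    for _ in range(n_iter):
        for var, targets in pop_margins.items():
            current = w.groupby(df[var]).sum()          # weighted margin now
            factors = pd.Series(targets) / current       # per-category correction
            w = w * df[var].map(factors)                 # rescale each row's weight
    return w

# Illustrative data and margins only; the paper does not specify its procedure.
sample = pd.DataFrame({
    "sex": ["m", "f", "f", "m", "f"],
    "age_group": ["18-39", "18-39", "40-64", "65+", "40-64"],
    "weight": [1.0] * 5,
})
pop = {
    "sex": {"m": 2.4, "f": 2.6},
    "age_group": {"18-39": 1.7, "40-64": 2.0, "65+": 1.3},
}
sample["weight"] = rake(sample, pop)
```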


Relative Biases in Probability-Based Web Survey Panels: Research Synthesis

Miss Andrea Ivanovska (Faculty of Social Sciences, University of Ljubljana) - Presenting Author
Professor Vasja Vehovar (Faculty of Social Sciences, University of Ljubljana)

Probability-based web panels are increasingly recognized as valuable and cost-effective data sources in modern survey research. However, concerns persist regarding the quality of the estimates they produce. We therefore first conducted a systematic literature review, identifying 49 publications that assessed the accuracy of estimates derived from probability-based web panels. Most of the evaluated question items in this literature related to living conditions and background variables (i.e., socio-demographics). Items were measured on nominal, ordinal, and interval scales, and a considerable number could also be classified as sensitive (e.g., health-related). Next, we evaluated the corresponding relative bias (RB), i.e., the relative difference between the estimate from the panel and the corresponding population value (provided externally). As an indicator of data quality, the RB served as the key evaluation criterion. Following a preliminary investigation, we identified 1,500 items from 34 studies for which we were able to calculate the RB. The findings showed a median RB of 14% across all items. Employing a mixed-effects model to explore predictors of RB, we found that estimates related to family, living conditions, and respondents’ background exhibited RBs approximately 20% lower than average. By contrast, items addressing national politics displayed a 61% higher RB. Items measured on ordinal or interval scales had a 34% higher RB than nominally scaled items, and each level (out of three) of sensitivity increased RB by 10%. These findings suggest that attention to domain, sensitivity, and measurement scale is needed to account for biases and to enhance the utility of probability-based web panels for robust data collection. The analysis provides preliminary insights and points to areas for further research, particularly refined coding and the implementation of meta-analytic models.
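As a minimal sketch of the quality indicator used here: RB is the relative difference between a panel estimate and its external benchmark, and (as one reading of the abstract) item-level RBs can be summarized by the median of their absolute values. All numbers below are illustrative, not the paper's data:

```python
import numpy as np

def relative_bias(panel_estimate, population_value):
    """Relative bias (RB): relative difference between a panel estimate
    and its externally provided population benchmark."""
    return (panel_estimate - population_value) / population_value

# Illustrative item-level panel estimates vs. external benchmarks.
panel_estimates = np.array([0.52, 0.31, 0.18])
benchmarks = np.array([0.48, 0.35, 0.20])

rb = np.abs(relative_bias(panel_estimates, benchmarks))
print(f"Median absolute RB: {np.median(rb):.0%}")
```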


Understanding Participant Burden in CAWI Surveys through Paradata: Insights from the Panel 'Health in Germany'

Mr Tim Kuttig (Robert Koch Institute) - Presenting Author
Mr Johannes Lemcke (Robert Koch Institute)
Mr Stefan Albrecht (Robert Koch Institute)
Mr Matthias Wetzstein (Robert Koch Institute)

Background
The Robert Koch Institute has set up a probability-based panel infrastructure focused on public health research (‘Health in Germany’). While participation via paper-based questionnaire is possible, the ubiquity of internet-ready devices and the implemented push-to-web strategy have made online participation (CAWI) predominant. This provides us with valuable data on participants’ behaviour. To enhance participant retention and mitigate dropout, it is crucial to understand the factors contributing to participant burden. Focusing on the CAWI mode enables us to analyse paradata, which offer insights into response behaviour, data quality, and opportunities for improving questionnaire design. In this presentation we therefore analyse response times and dropouts as key indicators of response burden.

Methodology
We analysed screen times and dropouts as proxies for participant burden in four online questionnaires administered to more than 36,000 registered panelists. For each respondent, we recorded screen time per question and the device used, while integrating demographic information (age, sex, education level) and detailed questionnaire metrics (length of question text, number of answer options, complexity of grid questions, among others).

Results
Preliminary findings indicate that age significantly affects response times, with older participants taking longer to respond on average. Additionally, longer question texts and a greater number of answer options correlate with increased screen times, while higher education levels are associated with quicker responses. Notably, device type does not show significant differences in response times. The availability of such detailed information and the longitudinal nature of the panel will also enable us to track changes in participant behaviour over time (e.g. panel conditioning) and analyse more complex associations between questionnaire characteristics and response times in the future. Furthermore, this research improves our ability to more accurately predict the actual burden of participation when designing upcoming questionnaires within the panel.
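A hedged sketch of how such paradata might be modelled: a mixed-effects regression of log screen time on respondent and question characteristics, with random intercepts per respondent to handle repeated measurements. The file and column names are assumptions for illustration, not the panel's actual schema:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical paradata layout: one row per respondent x question screen.
paradata = pd.read_csv("screen_times.csv")

# Random intercepts per respondent account for repeated measurements;
# log screen times tame the usual right skew of timing paradata.
model = smf.mixedlm(
    "np.log(screen_time) ~ age + text_length + n_answer_options + C(device)",
    data=paradata,
    groups=paradata["respondent_id"],
)
print(model.fit().summary())
```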


The Rolling Cross-Section Panel Design: Causal Inference for Expected and Unexpected Events

Mr Cristóbal Moya (DIW Berlin / Bielefeld University) - Presenting Author
Dr Monica Gerber (Universidad Diego Portales)

This article proposes an innovative approach for panel surveys by developing a rolling cross-section panel design (RCSP). The RCSP design randomly assigns participants to complete surveys at different time points, making it possible to draw causal inferences about the effects of expected and unexpected events that occur during panel waves. It also helps identify the potential effects of events occurring between waves.

While rolling cross-section designs have been primarily applied in cross-sectional studies, we show how the design can be extended to panel study designs. The approach relies on the randomization of participants within waves, which creates equivalent groups between any time points within waves. Moreover, balancing participants with different profiles optimizes statistical power for potential effects from events within and between waves.

Our article describes the RCSP design by explaining its conceptual foundations, illustrating it with a case study on police legitimacy in Chile, and demonstrating its properties with simulations based on different scenarios of sample size, attrition, and auxiliary information.

We conclude that the RCSP design offers a promising tool for panel surveys that is especially suitable for studies conducted online. It can enhance studies whose outcomes potentially interact with unexpected events and provides a sound method to assess how expected events may influence study outcomes. We also discuss the implementation challenges of this design.
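A minimal sketch of the core randomization step of such a design (function and parameters are illustrative, not taken from the article): panellists are shuffled and spread evenly over the fieldwork days of a wave, so that an event on any given day splits the sample into comparable pre- and post-event groups:

```python
import random

def assign_fieldwork_days(panel_ids, n_days, seed=42):
    """Randomly spread panellists over the fieldwork days of a wave,
    so any two day-groups are equivalent in expectation and an event
    on day k yields comparable pre-/post-event groups."""
    rng = random.Random(seed)
    ids = list(panel_ids)
    rng.shuffle(ids)
    return {pid: i % n_days for i, pid in enumerate(ids)}

# Illustrative: 2,000 panellists across a 30-day wave.
assignment = assign_fieldwork_days(range(2000), n_days=30)
```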


Optimizing Probability-Based Panel Recruitment and Mixed-Mode Data Collection in the Netherlands: A Comparative Study of CATI and SMS Approaches

Mr Carsten Broich (Sample Solutions BV) - Presenting Author
Ms Nadica Stankovikj (Sample Solutions BV)
Mr Clark Letterman (Gallup)
Mr Rajesh Srinivasan (Gallup)
Ms Julie Zeplin (Gallup)

This study investigates innovative methodologies for recruiting respondents into a probability-based online panel in the Netherlands, utilizing two distinct recruitment approaches: piggy-backing CATI samples from Random Digit Dialing (RDD) and direct SMS-CATI recruitment. A follow-up survey was conducted to compare response behavior and potential biases across recruitment and interview modalities.

The survey was conducted with four cohorts: (1) 150 respondents recruited via CATI and interviewed via CATI, (2) 150 CATI-recruited respondents interviewed online, (3) 150 respondents recruited via SMS/CATI and interviewed via CATI, and (4) 150 SMS/CATI-recruited respondents interviewed online. The study examines differences in completion rates, mode-specific response patterns, and potential attitudinal biases introduced by recruitment and interview modes.

The findings provide insights into recruitment effectiveness in the Dutch context, revealing variations in completion rates, with online interviews showing higher efficiency but potentially introducing attitudinal biases. Mode bias between CATI and online responses is analyzed, focusing on differences in respondent engagement and data quality. These results contribute to understanding optimal recruitment and interview strategies for probability-based panels in the Netherlands and offer guidance on reducing mode-specific biases to enhance representativeness. The implications for survey methodology and panel management in the European market will be discussed.
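Since the study compares completion rates across a 2x2 design (recruitment mode by interview mode), a minimal sketch of such a comparison might use a chi-square test of independence; the counts below are illustrative placeholders, not the study's data:

```python
from scipy.stats import chi2_contingency

# Hypothetical (complete, non-complete) counts for the four cohorts
# (recruitment mode -> interview mode).
cohorts = {
    "CATI -> CATI": (95, 55),
    "CATI -> Web": (110, 40),
    "SMS/CATI -> CATI": (90, 60),
    "SMS/CATI -> Web": (105, 45),
}
table = [list(counts) for counts in cohorts.values()]
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2={chi2:.2f}, dof={dof}, p={p:.3f}")
```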


Analyzing the Causal Effect of Survey Burden on Nonresponse in Probability-Based Online Panels Among New Panel Respondents

Professor Arie Kapteyn (Center for Economic and Social Research, University of Southern California) - Presenting Author
Mr Htay-Wah Saw (Michigan Program in Survey and Data Science, University of Michigan-Ann Arbor)
Professor Marco Angrisani (Center for Economic and Social Research, University of Southern California)

We present results from a randomized controlled trial (RCT) that evaluated the causal effect of survey burden on nonresponse in a probability-based online panel. The experiment was implemented within the Understanding America Study (UAS), a probability-based online panel representative of the U.S. adult population. We recruited 2,000 new participants and randomly assigned half to a low survey burden condition (n=1,000) and the remaining half to a high survey burden condition (n=1,000). In the low burden condition, participants received one survey invitation every four weeks, whereas in the high burden condition they received one survey invitation every two weeks. The only difference between the two conditions was the frequency of survey invitations; other design features, such as survey topics and questionnaire length, remained the same in both conditions. The RCT began in February 2024 and continued until the end of December 2024. We found that new panelists on a more frequent survey schedule had higher response rates than those on a less frequent schedule. Subgroup analyses revealed the largest treatment effects among high-education and high-income groups and among respondents currently working, with no effects among low-education and low-income groups or respondents not currently working. Our findings suggest that the treatment effects were mainly driven by engagement rather than by incentive effects linked to a more frequent survey schedule. We discuss the theoretical and practical implications of our findings for improving panel management and retention practices in future studies.
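As a sketch of the headline comparison, the two arms' response rates can be contrasted with a two-proportion z-test; the counts below are placeholders, not the UAS results:

```python
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical response counts for the two arms (n=1,000 each).
responded = [820, 760]   # high burden (biweekly), low burden (monthly)
invited = [1000, 1000]

z, p = proportions_ztest(responded, invited)
print(f"z = {z:.2f}, p = {p:.3f}")
```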