ESRA 2025 Preliminary Program

All time references are in CEST

Optimizing Probability-Based Web Panel Performance 2

Session Organisers: Professor Vasja Vehovar (University of Ljubljana, Faculty of Social Sciences)
Dr Gregor Čehovin (University of Ljubljana, Faculty of Social Sciences)
Ms Andreja Praček (University of Ljubljana, Faculty of Social Sciences)
Time: Tuesday 15 July, 11:00 - 12:30
Room: Ruppert 0.33

Driven by the rising costs of traditional survey modes, web surveys have become increasingly prevalent in contemporary social science research. Because recruiting sample units is expensive, panels are frequently employed for web surveys. Probability-based web panels play a particularly important role: by selecting participants with a known probability of inclusion, they offer a more accurate representation of the population. These panels may also combine web surveys with other modes (e.g., phone or face-to-face) to reach nonrespondents. Panel studies face numerous challenges, including recruiting respondents, establishing and maintaining panels, optimizing data quality and cost-efficiency, minimizing biases, managing mixed survey modes, and retaining respondents across waves while ensuring response quality.

Submissions are invited on methods and strategies related to the improvement and optimization of probability-based panels, focusing on the following processes:

• Assessing and reducing item and unit nonresponse, and mitigating the impact of nonresponse bias;
• Investigating the relationship between costs and overall response quality, including careless responding and satisficing;
• Enhancing sampling techniques, incentive strategies, and recruitment methods to improve initial response rates and respondent retention;
• Comparing response quality between web surveys and other modes, as well as across different device types;
• Assessing response quality and nonresponse bias across panel waves;
• Improving questionnaire design and layout to enhance response quality.

Keywords: probability-based panels, web surveys, mixed-mode surveys, survey costs, response quality, nonresponse bias, experimental survey research, survey design

Papers

Impacts of Modifiable Survey Experiences on Response Probability to a New Survey in a Probability-Based Web Panel

Dr Haomiao Jin (University of Surrey) - Presenting Author
Mr Harsh Chheda (University of Southern California)
Professor Arie Kapteyn (University of Southern California)

Background:
High response rates are critical for the success of probability-based online panels. Understanding the influence of modifiable survey characteristics—such as frequency, length, and topic—is essential for improving response rates and data quality. This study investigates how these factors affect the probability that panel members respond to a new survey.
Methods:
We analyzed data from a random sample of 850 participants in the Understanding America Study (UAS), a probability-based web panel in the United States. An idiographic approach was used to model individual response processes. A latent Markov chain model incorporated survey frequency over the past year, as well as the length and topic of the last survey as explanatory variables, while controlling for an unobserved panel commitment level. Individual-level effects were estimated using a Monte Carlo–based method and then pooled to obtain overall relationships.
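In schematic form (our notation; the authors' exact specification may differ), such a model links a latent commitment chain to the observed response indicator:

\[
\Pr(C_{i,t}=k \mid C_{i,t-1}=j) = \pi^{(i)}_{jk},
\qquad
\Pr(R_{i,t}=1 \mid C_{i,t}=k,\, x_{i,t}) = \operatorname{logit}^{-1}\!\big(\alpha_k + \beta_i^{\top} x_{i,t}\big),
\]

where \(C_{i,t}\) is panelist \(i\)'s unobserved commitment level at invitation \(t\) and \(x_{i,t}\) collects past-year survey frequency and the length and topic of the last survey; the individual-level coefficients \(\beta_i\) are what the Monte Carlo procedure estimates and then pools.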
Results:
Our findings indicate that survey frequency in the past year had no significant impact on subsequent response probability. Survey length showed a small but positive effect on future responses. With respect to topic, surveys covering socio-demographic and economic/financial content demonstrated no significant influence. In contrast, surveys assessing behaviors and psychology, events and environmental factors, or health-related topics were associated with lower response probabilities. Cognitive tests had a positive effect, suggesting that certain types of survey experiences may engage respondents more effectively.
Discussion:
These results underscore that managing survey length and thoughtfully selecting survey topics may be more effective than simply limiting survey frequency to maintain response rates. Tailoring survey content in line with respondent preferences and experiences could foster sustained engagement in long-term panel participation. By recognizing which topics encourage continued involvement and which deter it, panel administrators can strategically design surveys to strengthen panel engagement and improve response rates over time.


The use of incentive points in a large population-based probability-based access panel: Who saves and who spends?

Mr Johannes Lemcke (Robert Koch-Institut)
Mr Daniel Grams (Robert Koch-Institut)
Mr Ilter Öztürk (Robert Koch-Institut) - Presenting Author

Background:
In recent years, an increasing number of academic probability-based online panels have been utilising an incentive system in which virtual points are awarded for participation in studies. These points can then be redeemed for vouchers or cash transfers, for example. Such systems are employed to enhance participation and engagement. The present study explores the dynamics of saving and spending behaviour among panellists in relation to different determinants.
Methods:
We conducted an analysis using data from about 20,000 online panellists (who received incentive points for interview completion) of the RKI panel “Health in Germany”, a large population-based probability-based access panel. Online panel members accrue incentive points with a value equivalent to €5, which can be redeemed for various retailer vouchers. The present analysis examined redemption behaviour following the first two waves, employing descriptive analysis and logit regression models to assess the saving and spending behaviours associated with incentive points.
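As a minimal sketch of the kind of model described, assuming a hypothetical per-panelist extract with columns redeemed, age, gender, education, and health_status (not the authors' code or data):

```python
# Minimal sketch (assumed column names; not RKI code): logit model of
# voucher redemption after the first two waves.
import pandas as pd
import statsmodels.formula.api as smf

panel = pd.read_csv("panelists.csv")  # hypothetical extract: one row per panelist
# redeemed = 1 if any incentive points were spent on a voucher, 0 if all saved
fit = smf.logit("redeemed ~ age + C(gender) + C(education) + C(health_status)",
                data=panel).fit()
# Per the reported results, the age coefficient should come out negative
# (younger panelists are more likely to redeem).
print(fit.summary())
```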
Results:
Preliminary findings indicate that only about 15% of the panellists who received points after completing the online questionnaire spend those points on vouchers (after two regular waves). The findings show a statistically significant relationship between age and the redemption of online vouchers: the younger the panel members, the higher the probability of redeeming incentive points. The analysis further reveals that gender is a salient factor in voucher redemption. In contrast, education and health status were found to have no significant impact on voucher redemption. A more detailed presentation of the results will be provided (additional survey waves will be included in the analysis), including which product categories (e.g. universal or supermarket vouchers) are more likely to be used by which subgroups.


Comparing longitudinal nonresponse bias patterns across two German probability-based panel surveys

Mr Julian Diefenbacher (GESIS)
Dr Barbara Felderer (GESIS) - Presenting Author
Professor Jannis Kück (Heinrich Heine Universität Düsseldorf)
Dr Phil-Adrian Klotz (Heinrich Heine Universität Düsseldorf)
Professor Martin Spindler (Universität Hamburg)

Nonresponse poses a threat to surveys, as systematic nonresponse can lead to nonresponse bias and jeopardize population inference. In panel surveys, this problem is exacerbated not only by initial nonresponse but also by nonresponse from wave to wave. While much is known about nonresponse bias in panel recruitment, the development of nonresponse bias over the life cycle of a panel has been analyzed far less.
This study compares nonresponse bias for two probability-based German panel studies, the GESIS Panel and the German Internet Panel. Both panels claim to be representative of the German adult population and are recruited offline on the basis of samples from official registers. In addition, both panels endeavor to include the offline population, either through a mixed-mode design or by providing people without Internet access or a device with the equipment needed to participate in the online survey. Both panels include several recruitment cohorts recruited between 2013 and 2021. We ask the following research questions:
1) Does the initial nonresponse bias increase across panel waves, remain constant, or even decrease?
2) Are the results constant across recruitment cohorts and panel studies?
To investigate nonresponse bias, R-indicators are estimated for each wave and cohort in both panel studies and compared over time. The propensity model underlying the R-indicator is based on age, gender, education, marital status, employment situation, country of birth, and internet use. The estimation method and the specification of the functional form in which the covariates enter the model can be crucial, which is why we compare different methods for estimating propensities: logistic regression and random forest models. The latter are considered more flexible in accounting for nonlinearities and interactions between model variables. The presentation concludes with recommendations on the choice of estimation method.
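For reference, the R-indicator summarises the spread of the estimated response propensities \(\hat\rho_i\); its standard definition (Schouten, Cobben and Bethlehem, 2009) is

\[
R(\hat\rho) = 1 - 2\,S(\hat\rho),
\qquad
S(\hat\rho) = \sqrt{\frac{1}{N-1}\sum_{i=1}^{N}\big(\hat\rho_i - \bar{\hat\rho}\big)^2},
\]

so that \(R = 1\) indicates fully representative response (all propensities equal) and smaller values indicate a greater risk of nonresponse bias.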


Balancing Costs and Errors in Probability-Based Survey Recruitment: Lessons from a Web Panel Experiment

Ms Andreja Praček (University of Ljubljana) - Presenting Author
Dr Gregor Čehovin (University of Ljubljana)
Dr Vasja Vehovar (University of Ljubljana)

Traditional evaluations of survey recruitment strategies often focus on a single dimension, such as cost minimisation, bias reduction, or maximising response rates. This study adopts a more holistic approach, integrating survey costs and errors to determine the optimal recruitment strategy. The 2024 experiment, conducted within the 1KA probability-based web panel, involved 7,000 participants randomly assigned to eight incentive groups. Incentives included variations of €5/€10 gift cards in conditional, unconditional, and combined offers, with response rates ranging from 16% to 50%. To assess performance, we calculated the costs per unit of accuracy (CUA), which combines mean squared error (MSE) and costs. The MSE incorporates both variance and squared bias, assessed by comparing survey estimates to official data from the national statistical office and national databases. Findings from 172 variables show that the most effective group varied depending on the estimate topic. Across all variables, however, the €5 unconditional gift card emerged as the most efficient incentive, providing the best balance between cost and accuracy.
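Although the abstract does not spell out the formula, one plausible operationalisation consistent with its description is

\[
\mathrm{MSE}(\hat y) = \mathrm{Var}(\hat y) + \mathrm{Bias}(\hat y)^2,
\qquad
\mathrm{CUA} = C \cdot \mathrm{MSE}(\hat y),
\]

where \(C\) denotes the recruitment and fieldwork costs of an incentive group; lower CUA then indicates a better balance between cost and accuracy.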


Assessing Bias in Survey Estimates: Comparing Probability-Based and Nonprobability Web Panels to Traditional Probability-Based Surveys

Dr Gregor Čehovin (Faculty of Social Sciences, University of Ljubljana) - Presenting Author
Professor Vasja Vehovar (Faculty of Social Sciences, University of Ljubljana)
Ms Andreja Praček (Faculty of Social Sciences, University of Ljubljana)
Mr Luka Štrlekar (Faculty of Social Sciences, University of Ljubljana)
Ms Andrea Ivanovska (Faculty of Social Sciences, University of Ljubljana)

The comparative quality of survey estimates derived from traditional probability-based surveys (TPS) and various web panels remains a substantial methodological concern. This study evaluated potential biases in survey estimates across three modes: TPS, probability-based web panels (PWP), and nonprobability web panels (NWP). The analysis encompassed 700 question items from 10 concurrent surveys conducted in Slovenia, using official statistics from TPS as benchmarks. Relative bias (RB), defined as the difference between panel estimates and external population values, served as the primary metric for evaluating data quality.
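Although the abstract defines RB only verbally, a common operationalisation consistent with the 10% thresholds reported below (the authors' exact formula may differ) is

\[
\mathrm{RB}(\hat y) = \frac{\hat y_{\text{panel}} - \bar Y_{\text{TPS}}}{\bar Y_{\text{TPS}}} \times 100\%,
\]

where \(\bar Y_{\text{TPS}}\) is the benchmark value from official statistics.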

The results revealed substantial differences in relative bias across survey modes and topics. In PWP, 30% of estimates exhibited an RB exceeding 10%, while this increased to 40% in NWP. Topic-specific analysis indicated particularly high bias in measurements related to income and living conditions, with 56% of PWP and 70% of NWP estimates exceeding 10% RB. General opinion items showed comparatively lower bias, with 27% and 35% of estimates exceeding 10% RB in PWP and NWP, respectively.

A consistent pattern emerged in which PWP and NWP respondents reported progressively lower levels of happiness, trust, religiosity, economic optimism, and EU support compared to TPS respondents. Systematic differences were particularly evident for sensitive social issues, such as immigrant acceptance and perceived climate change responsibility, where reported levels also declined across this progression. Conversely, PWP and NWP respondents reported progressively higher levels of political engagement, long-term health issues, and political criticism compared to TPS respondents.

The findings demonstrate that while both web panel types exhibit substantial deviations from TPS benchmarks, the differences between PWP and NWP are relatively modest. These results provide insights into the trade-offs researchers face when choosing between the costly but more accurate TPS and the more cost-effective web panels, particularly when considering specific research topics and resource constraints.


The impact of SMS as a post-notification tool on response rates: evidence from a natural experiment in a Belgian probability panel

Professor Gert Thielemans (University of Antwerp) - Presenting Author
Professor Amelie Van Pottelberge (Ghent University)

This study examines the effect of adding SMS to email for the second reminder in a probability-based web panel survey. We use data collected by The Social Study (TSS), a Belgian probability-based panel with over 5,500 panelists. During the first four waves of TSS, panelists received an invitation email, followed by two email reminders. Starting from the fifth wave, the second reminder was sent via SMS as well as email. We aim to assess the impact of this change on response rates, response quality, and nonresponse bias.
First, we will compare response rates. As some panelists (N=1,594) have experienced both modes of second reminders, we exploit within-respondent variation with fixed-effects logistic models to evaluate the overall likelihood of response based on demographic variables and reminder type. Additionally, survival analyses on the full panel will be used to compare the speed of response between the two modes.
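A minimal sketch of this design, assuming a hypothetical long-format data set with columns panelist_id, sms_reminder, responded, and days_to_response (not the authors' code):

```python
# Minimal sketch (assumed column names; not the authors' code): within-respondent
# fixed-effects (conditional) logit plus Kaplan-Meier curves for response speed.
import pandas as pd
from statsmodels.discrete.conditional_models import ConditionalLogit
from lifelines import KaplanMeierFitter

df = pd.read_csv("tss_waves.csv")  # hypothetical file: one row per panelist-wave

# Conditional logit: panelist fixed effects absorb stable demographics, so the
# reminder-type effect is identified from panelists observed under both regimes.
both = df.groupby("panelist_id")["sms_reminder"].transform("nunique") > 1
sub = df[both]
res = ConditionalLogit(sub["responded"].to_numpy(),
                       sub[["sms_reminder"]].to_numpy(),
                       groups=sub["panelist_id"].to_numpy()).fit()
print(res.summary())

# Kaplan-Meier comparison of response speed on the full panel, censoring
# nonrespondents at the end of fieldwork.
kmf = KaplanMeierFitter()
for regime, grp in df.groupby("sms_reminder"):
    kmf.fit(grp["days_to_response"], event_observed=grp["responded"],
            label=f"SMS reminder = {regime}")
    print(kmf.median_survival_time_)
```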
Previous research has shown mixed results regarding the effectiveness of email versus SMS reminders. For instance, Andreadis (2020) found that SMS reminders can significantly improve response rates in mobile-friendly surveys. Keding et al. (2016) reported inconclusive results for post-notification reminders. Hansen and Pedersen (2012) reported lower response rates for SMS compared to email, suggesting that the effectiveness of SMS may depend on the context and target audience.
References:
Andreadis, I. (2020). Text message (SMS) pre-notifications, invitations and reminders for web surveys. Survey Methods: Insights from the Field, 1-12.
Keding, A., Brabyn, S., MacPherson, H., Richmond, S. J., & Torgerson, D. J. (2016). Text message reminders to improve questionnaire response rates. Journal of Clinical Epidemiology, 79, 90-95.