
ESRA 2023 Glance Program


All time references are in CEST

Optimizing Probability-Based Web Panel Performance

Session Organisers: Professor Vasja Vehovar (University of Ljubljana, Faculty of Social Sciences)
Dr Gregor Čehovin (University of Ljubljana, Faculty of Social Sciences)
Ms Andreja Praček (University of Ljubljana, Faculty of Social Sciences)
Time: Tuesday 18 July, 09:00 - 10:30
Room

In contemporary social science research, driven by the rising costs of traditional survey modes, web surveys have become increasingly prevalent. Due to the high costs of recruiting sample units, panels are frequently employed for web surveys. Probability-based web panels play a particularly important role by selecting participants with a known chance of inclusion, thereby offering a more accurate representation of the population. These panels may also combine web surveys with other modes (e.g., phone or face-to-face) to reach nonrespondents. Panel studies face numerous challenges, including recruiting respondents, establishing and maintaining panels, optimizing data quality and cost-efficiency, minimizing biases, managing mixed survey modes, and retaining respondents across waves while ensuring response quality.

Submissions are invited on methods and strategies related to the improvement and optimization of probability-based panels, focusing on the following processes:

• Assessing and reducing item nonresponse, unit nonresponse, and mitigating the impact of nonresponse bias;
• Investigating the relationship between costs and overall response quality, including careless responding and satisficing;
• Enhancing sampling techniques, incentive strategies, and recruitment methods to improve initial response rates and respondent retention;
• Comparing response quality between web surveys and other modes, as well as across different device types;
• Assessing response quality and nonresponse bias across panel waves;
• Refining questionnaire design and layout to improve response quality.

Keywords: probability-based panels, web surveys, mixed-mode surveys, survey costs, response quality, nonresponse bias, experimental survey research, survey design

Papers

Impacts of Modifiable Survey Experiences on Response Probability to a New Survey in a Probability-Based Web Panel

Dr Haomiao Jin (University of Surrey) - Presenting Author
Mr Harsh Chheda (University of Southern California)
Professor Arie Kapteyn (University of Southern California)

Background:
High response rates are critical for the success of probability-based online panels. Understanding the influence of modifiable survey characteristics—such as frequency, length, and topic—is essential for improving response rates and data quality. This study investigates how these factors affect the probability that panel members respond to a new survey.
Methods:
We analyzed data from a random sample of 850 participants in the Understanding America Study (UAS), a probability-based web panel in the United States. An idiographic approach was used to model individual response processes. A latent Markov chain model incorporated survey frequency over the past year, as well as the length and topic of the last survey as explanatory variables, while controlling for an unobserved panel commitment level. Individual-level effects were estimated using a Monte Carlo–based method and then pooled to obtain overall relationships.
Results:
Our findings indicate that survey frequency in the past year had no significant impact on subsequent response probability. Survey length showed a small, but positive, effect on future responses. Topic-wise, surveys covering socio-demographic and economic/financial content demonstrated no significant influence. In contrast, surveys assessing behaviors and psychology, events and environmental factors, or health-related topics were associated with lower response probabilities. Cognitive tests had a positive effect, suggesting that certain types of survey experiences may engage respondents more effectively.
Discussion:
These results underscore that managing survey length and thoughtfully selecting survey topics may be more effective than simply limiting survey frequency to maintain response rates. Tailoring survey content in line with respondent preferences and experiences could foster sustained engagement in long-term panel participation. By recognizing which topics encourage continued involvement and which deter it, panel administrators can strategically design surveys to strengthen panel engagement and improve response rates over time.


Applying a Panel Member-First Approach on KnowledgePanel: Lessons to Improve Panel Performance

Mr Nick Bertoni (Ipsos Public Affairs) - Presenting Author

KnowledgePanel in the U.S. is an online, probability-based panel born in 1999 that has undergone numerous transformative changes since its inception. The platform has seen significant advancements in areas like recruitment, sampling, weighting, and surveying, creating a highly accurate and trusted online, probability-based data collection vehicle over the last two decades. Federal agencies like the CDC, NIH, FDA and others have relied upon KnowledgePanel for their data collection needs.

Benchmarking against federal point estimates is one common way that researchers can evaluate and quantify the quality of online probability panels. This is an area where KnowledgePanel has excelled. Beyond this tangible measure is an intriguing question: Are there other management strategies that can directly enhance the quality of data collection, even if they are harder to define or measure?

In 2023, the Ipsos KnowledgePanel team set out to rethink panel management by adopting a “panel member-first” design. Being panel-member first means being intentional about managing the panel member experience in a way that promotes and improves member engagement. The tone, cadence, and overall content of messaging were overhauled in an attempt to increase engagement, and occasional extra rewards along the way can be a pleasant surprise. The impact of this approach was clearly seen within the first year of implementation, as this presentation will demonstrate. KnowledgePanel saw substantial improvements in panel retention and completion rates in 2023, and panel performance in 2024 has been even better. Changes in methodology, messaging, and incentivization will be presented to demonstrate how treating panel members better can directly lead to better panel performance. These lessons can inform panel managers of all varieties.


Comparing longitudinal nonresponse bias patterns across two German probability-based panel surveys

Mr Julian Diefenbacher (GESIS)
Dr Barbara Felderer (GESIS) - Presenting Author
Professor Jannis Kück (Heinrich Heine Universität Düsseldorf)
Dr Phil-Adrian Klotz (Heinrich Heine Universität Düsseldorf)
Professor Martin Spindler (Universität Hamburg)

Nonresponse poses a threat to surveys, as systematic nonresponse can lead to nonresponse bias and jeopardize population inference. In panel surveys, this problem is exacerbated not only by initial nonresponse, but also by nonresponse from wave to wave. While much is known about nonresponse bias in panel recruitment, the development of nonresponse bias over the life cycle of a panel has been analyzed far less.
This study compares nonresponse bias for two probability-based German panel studies, the GESIS Panel and the German Internet Panel. Both panels claim to be representative of the German adult population and are recruited offline on the basis of samples from official registers. In addition, both panels endeavor to include the offline population, either through a mixed-mode design or by providing people without Internet access or a device with the means to participate in the online survey. Both panels include several recruitment cohorts recruited between 2013 and 2021. We ask the following research questions:
1) Does the initial nonresponse bias increase across panel waves, remain constant, or even decrease?
2) Are the results constant across recruitment cohorts and panel studies?
In order to investigate nonresponse bias, R-indicators are estimated for each wave and cohort in both panel studies and compared over time. The propensity model that forms the R-indicator is based on age, gender, education, marital status, employment situation, country of birth, and internet use. The estimation method and the specification of the functional form in which the covariates are included in the model can be crucial, which is why we compare different methods for estimating propensities: logistic regression and random forest models. The latter are considered more flexible in terms of accounting for nonlinearities and interactions between model variables. The presentation concludes with recommendations on the choice of estimation method.
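
To make the R-indicator comparison concrete, the following is a minimal, hypothetical sketch (not the authors' code): it fits a logistic regression and a random forest to a toy response indicator and computes R = 1 - 2 * SD of the estimated propensities. The toy data and column names are invented assumptions.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

def r_indicator(propensities: np.ndarray) -> float:
    """R-indicator: R = 1 - 2 * SD of the estimated response propensities."""
    return 1.0 - 2.0 * propensities.std(ddof=1)

# Invented wave data; a real analysis would use the register covariates listed above.
rng = np.random.default_rng(0)
n = 3000
wave = pd.DataFrame({
    "age": rng.integers(18, 90, n),
    "female": rng.integers(0, 2, n),
    "higher_education": rng.integers(0, 2, n),
})
true_propensity = 1 / (1 + np.exp(-(-1 + 0.02 * wave["age"] + 0.5 * wave["higher_education"])))
wave["responded"] = rng.binomial(1, true_propensity)

X, y = wave[["age", "female", "higher_education"]], wave["responded"]
models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "random forest": RandomForestClassifier(n_estimators=300, min_samples_leaf=50, random_state=0),
}
for name, clf in models.items():
    clf.fit(X, y)
    # Compare how each propensity model translates into an R-indicator for the wave.
    print(name, round(r_indicator(clf.predict_proba(X)[:, 1]), 3))
```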


Building a Probability-based Online Panel in the Czech Republic: Experience and Challenges Encountered to Date

Ms Paulina Tabery (Public Opinion Research Centre, Institute of Sociology of the Czech Academy of Sciences) - Presenting Author
Mr Matous Pilnacek (Public Opinion Research Centre, Institute of Sociology of the Czech Academy of Sciences)
Mr Martin Spurny (Public Opinion Research Centre, Institute of Sociology of the Czech Academy of Sciences)

Recent years have witnessed a transition in survey mode in the Czech Republic, a phenomenon that has also been observed in other countries. While the ability to reach the target population was already undergoing gradual change, the onset of the Covid-19 pandemic accelerated and intensified this trend. The Public Opinion Research Centre, situated within the Institute of Sociology of the Czech Academy of Sciences, has conducted regular surveys of Czech citizens on political, economic and social issues ten times per year for the past three decades. These surveys have been conducted in person, but in order to continue this academic project, known as "Our Society", the Centre has recently established a probability-based online panel.
The aim of this paper is twofold: first, to present the process of establishing the panel and provide information on the methodological and practical choices made, such as the design of the sampling frame, respondent selection procedures, feedback from the field, and the data weighting procedure chosen to ensure the representativeness of the sample. Second, we will share our experience with the panel's functioning to date, including attrition rates and how the panel is being updated with new respondents. The presentation will also address item and unit nonresponse, and conclude with a comparison of the survey results from this probability-based panel with those from other population-based probability surveys currently conducted in the country and from opt-in online panels.


Relative Biases in Probability-Based Web Survey Panels: Research Synthesis

Miss Andrea Ivanovska (Faculty of Social Sciences, University of Ljubljana) - Presenting Author
Professor Vasja Vehovar (Faculty of Social Sciences, University of Ljubljana)

Probability-based web panels are increasingly recognized as valuable and cost-effective data sources in modern survey research. However, concerns persist regarding the quality of the estimates they produce. In this context, we first conducted a systematic literature review, where we found 49 publications that assessed the accuracy of estimates derived from probability-based web panels. Most of the evaluated question items in this literature were related to living conditions and background variables (i.e., socio-demographics). Items were measured on nominal, ordinal, and interval scales. A considerable number of items could also be classified as sensitive (e.g., health-related). Next, we evaluated the corresponding relative bias (RB), i.e., the relative difference between the estimate from the panel and the corresponding population value (provided externally). As an indicator of data quality, the RB then served as the key evaluation criterion. Following a preliminary investigation, we identified 1,500 items from 34 studies, for which we were able to calculate the corresponding RB. The findings showed a median RB of 14% across all items. Employing a mixed-effects model to explore predictors of RB, we found that estimates related to family, living conditions, and respondents’ background exhibit RBs approximately 20% lower than average. By contrast, items addressing national politics displayed a 61% higher RB. Items measured on ordinal or interval scales had a 34% higher RB than nominally scaled items, and each level (out of three) of sensitivity increased RB by 10%. These findings suggest that attention to the domain, sensitivity, and measurement scales is needed to account for biases and to enhance the utility of probability-based web panels for robust data collection. The analysis provides preliminary insights and points to areas for further research, particularly in refined coding and implementation of meta-analytic models.
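
As a purely illustrative aside, a minimal sketch of the relative-bias (RB) computation used as the evaluation criterion might look as follows; the item names and values below are invented, not taken from the 34 studies.

```python
import pandas as pd

def relative_bias(estimate: float, benchmark: float) -> float:
    """Absolute relative bias in percent: |estimate - benchmark| / |benchmark| * 100."""
    return abs(estimate - benchmark) / abs(benchmark) * 100

# Invented example items; in the synthesis, estimates come from the panels and
# benchmarks from external population data.
items = pd.DataFrame({
    "item": ["owns_dwelling", "voted_last_election", "weekly_religious_attendance"],
    "panel_estimate": [0.71, 0.68, 0.09],
    "population_value": [0.75, 0.55, 0.11],
})
items["rb_pct"] = [relative_bias(e, b) for e, b in
                   zip(items["panel_estimate"], items["population_value"])]
print(items)                                  # per-item RB
print("median RB:", round(items["rb_pct"].median(), 1))
```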


Improving the Catalan Citizen Panel through Adaptive Survey Design

Mr Joel Ardiaca (Universitat de Barcelona) - Presenting Author
Professor Jordi Muñoz (Universitat de Barcelona)
Dr Raül Tormos (Centre d'Estudis d'Opinió)

We have initiated a large-scale probabilistic panel project in Catalonia, utilizing both web and paper survey modes. This approach allows us to systematically study nonresponse biases across various sociodemographic profiles, providing detailed insights into the factors influencing participation rates. To improve response rates and mitigate nonresponse biases, we have conducted a series of recruitment experiments focusing on incentives and reminders. These experiments evaluate the effectiveness of diverse strategies, such as varying the type and amount of incentives offered and the frequency of reminders. Leveraging the data collected from these experiments, we have developed predictive models to estimate nonresponse probabilities based on participants’ characteristics and their previous responses in refreshment samples.

With access to an extensive dataset of nearly 90,000 cases, we utilize machine learning models to better understand and predict nonresponse behavior. Our focus extends beyond improving response rates; we prioritize enhancing the representativity of the panel to achieve a more balanced and accurate sample. We also evaluate data quality to ensure that methodological innovations lead to more reliable results. Crucially, the Catalan Citizen Panel provides a unique opportunity to empirically test the effectiveness of Adaptive Survey Design (ASD), demonstrating how tailored protocols can optimize treatment allocation and improve samples for public opinion research.


Understanding Participant Burden in CAWI Surveys through Paradata: Insights from the Panel 'Health in Germany'

Mr Tim Kuttig (Robert Koch Institute) - Presenting Author
Mr Johannes Lemcke (Robert Koch Institute)
Mr Stefan Albrecht (Robert Koch Institute)
Mr Matthias Wetzstein (Robert Koch Institute)

Background
The Robert Koch Institute set up a probability-based panel infrastructure focused on public health research (‘Health in Germany’). While participation via paper-based questionnaire is possible, the ubiquity of internet-ready devices and the implemented push-to-web strategy have made online participation (CAWI) predominant. This provides us with valuable data on the participants’ behaviour. To enhance participant retention and mitigate dropout, it is crucial to understand the factors contributing to participant burden. Focusing on the CAWI mode enables us to analyze paradata, which offers insights into response behaviour, data quality, and opportunities for improving questionnaire design. Therefore, in this presentation we analyse response time and dropouts as significant indicators of response burden.

Methodology
We analysed screen times and dropouts as proxies for participant burden from four online questionnaires that were administered to more than 36,000 registered panelists. For each respondent, we recorded screen time per question and the device used, while integrating demographic information (age, sex, education level) and detailed questionnaire metrics (length of question text, number of answer options, complexity of grid questions, among others).

Results
Preliminary findings indicate that age significantly affects response times, with older participants taking longer to respond on average. Additionally, longer question texts and a greater number of answer options correlate with increased screen times, while higher education levels are associated with quicker responses. Notably, device type does not show significant differences in response times. The availability of such detailed information and the longitudinal nature of the panel will also enable us to track changes in participant behaviour over time (e.g. panel conditioning) and analyse more complex associations between questionnaire characteristics and response times in the future. Furthermore, this research improves our ability to more accurately predict the actual burden of participation when designing upcoming questionnaires within the panel.
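
A hedged sketch of how such paradata might be modelled (not the RKI pipeline): a random-intercept regression of log screen time on question characteristics and respondent demographics. The synthetic data and all column names below are assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_resp, n_items = 200, 10

age = np.repeat(rng.integers(18, 80, n_resp), n_items)                  # per respondent
device = np.repeat(rng.choice(["desktop", "smartphone"], n_resp), n_items)
text_len = np.tile(rng.integers(40, 400, n_items), n_resp)              # per question
n_options = np.tile(rng.integers(2, 12, n_items), n_resp)

# Invented data-generating process, only to make the sketch runnable.
screen_time = (5 + 0.05 * age + 0.03 * text_len + 0.8 * n_options
               + rng.normal(0, 5, n_resp * n_items)).clip(min=1)

paradata = pd.DataFrame({
    "respondent_id": np.repeat(np.arange(n_resp), n_items),
    "age": age, "device": device,
    "question_text_length": text_len, "n_answer_options": n_options,
    "log_time": np.log(screen_time),
})

# Random intercept per panelist reflects that the same person answers many items.
model = smf.mixedlm(
    "log_time ~ age + C(device) + question_text_length + n_answer_options",
    data=paradata, groups=paradata["respondent_id"])
print(model.fit().summary())
```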


Testing the impact of a financial incentive for early bird registration for a probability panel

Dr Amelie Van Pottelberge (Universiteit Gent) - Presenting Author
Miss Katrien Vandenbroeck (Katholieke Universiteit Leuven)
Dr Gert Thielemans (Universiteit Antwerpen)
Dr Bart Meuleman (Katholieke Universiteit Leuven)
Dr John Lievens (Universiteit Gent)

Empirical evidence has demonstrated that cash incentives can be an effective way to increase participation in survey research (e.g., Göritz, 2006; Singer & Ye, 2013). Although unconditional incentives are known to produce the strongest effects, several studies have argued that providing an early bird incentive – that is, an additional incentive that is conditional upon participation before a specific deadline – can further boost response (Friedel et al., 2023; McGonagle, Sastry & Freedman, 2023).

This paper discusses the effectiveness of introducing a conditional monetary early bird incentive, in addition to an unconditional monetary incentive, in recruiting panelists for The Social Study. The Social Study is a Belgian mixed-mode probability panel facilitating survey research by offering panelists the option to complete questionnaires online or on paper. The early bird incentive is paid conditionally upon panel registration within 18 days of receiving the initial postal mail invitation. During the first stage of recruitment in 2023 and 2024, a large-scale experiment (N = 6,066) was conducted, offering half of the sample units the early bird incentive. The experiment tests the effect of introducing an early bird cash incentive on response rates, recruitment rates, cost-effectiveness of the recruitment strategy, sample representativeness, and panelists’ engagement after recruitment.


Improving Nonresponse Prediction in Online Probability-based Panels: Evaluating Machine Learning Approaches in the context of Varying Engagement Histories

Ms Ziyue Tang (The University of Manchester) - Presenting Author
Dr Alexandru Cernat (The University of Manchester)
Mr Curtis Jessop (National Centre for Social Research (NatCen))
Professor Natalie Shlomo (The University of Manchester)

Predicting nonresponse in complex panel surveys, where respondents join at different stages or participate in various studies over time, presents persistent challenges due to the diverse and dynamic nature of participation patterns. Many probability-based panels regularly recruit new participants, either continuously or periodically, to mitigate biases caused by attrition and ageing. Among them, the NatCen Opinion Panel collects extensive information from both existing participants and newly recruited ones, who are annually recruited through sources such as the British Social Attitudes Survey. Despite the richness of this data, only a small portion is typically leveraged in prediction models, constrained by issues such as missing values, missing variables, and complex participation histories. This study focuses on leveraging machine learning techniques to enhance nonresponse prediction models in panel surveys. Logistic regression is adopted as a baseline model, incorporating multiple variables (e.g., demographic information, socioeconomic characteristics, and response history). Advanced machine learning models, including Random Forests, Gradient Boosting, and Recurrent Neural Networks, are applied to better capture temporal dependencies and response patterns. Furthermore, this study explores the moderating role of panel tenure, investigating how the length of a respondent’s participation moderates the relationship between response propensities and other predictors. Model performance is evaluated using metrics such as Area Under the Curve and prediction error rates to assess accuracy across subgroups. These prediction models are designed to improve data collection by addressing nonresponse challenges and enhancing representativeness. The anticipated findings aim to demonstrate how machine learning can effectively address challenges in nonresponse prediction, providing actionable insights for improving adaptive survey designs in large, complex panel studies (e.g., varying incentives or optimising reminder frequencies). This study seeks to provide a solid foundation for improving the design and implementation of survey research, while advancing the application of machine learning in social surveys.
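
For illustration only, a minimal sketch of the baseline-versus-machine-learning comparison described above, using synthetic stand-in data and assumed feature names; the recurrent neural network component is omitted, and nothing here is NatCen's actual pipeline.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Invented panel data with a few hypothetical predictors of next-wave response.
rng = np.random.default_rng(1)
n = 5000
panel = pd.DataFrame({
    "age": rng.integers(18, 90, n),
    "panel_tenure_months": rng.integers(1, 120, n),
    "past_response_rate": rng.uniform(0, 1, n),
    "waves_invited": rng.integers(1, 40, n),
})
logit = -1 + 2 * panel["past_response_rate"] + 0.01 * panel["panel_tenure_months"]
panel["responded_next_wave"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X, y = panel.drop(columns="responded_next_wave"), panel["responded_next_wave"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=1)

models = {
    "logistic regression (baseline)": LogisticRegression(max_iter=1000),
    "random forest": RandomForestClassifier(n_estimators=300, random_state=1),
    "gradient boosting": GradientBoostingClassifier(random_state=1),
}
for name, clf in models.items():
    clf.fit(X_train, y_train)
    # AUC on held-out data, one of the evaluation metrics named in the abstract.
    auc = roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")
```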


The Rolling Cross-Section Panel Design: Causal Inference for Expected and Unexpected Events

Mr Cristóbal Moya (DIW Berlin / Bielefeld University) - Presenting Author
Dr Monica Gerber (Universidad Diego Portales)

This article proposes an innovative approach for panel surveys by developing a rolling cross-section panel design (RCSP). The RCSP design randomly assigns participants to complete surveys at different time points, enabling researchers to draw causal inferences about the effect of expected and unexpected events that occur during panel waves. It also contributes to identifying the potential effect of events between waves.

While rolling cross-section designs have been primarily applied in cross-sectional studies, we show how the design can be extended to panel study designs. The approach relies on the randomization of participants within waves, which creates equivalent groups between any time points within waves. Moreover, balancing participants with different profiles optimizes statistical power for potential effects from events within and between waves.

Our article describes the RCSP design by explaining its conceptual foundations, illustrating it with a case study on police legitimacy in Chile, and showing its properties with simulations based on different scenarios of sample size, attrition, and auxiliary information.

We conclude that the RCSP design offers a promising tool for panel surveys that is especially suitable for studies in online modes. It can enhance studies with outcomes that potentially interact with unexpected events and provides a sound method to assess how expected events may influence the study outcomes. We also discuss the implementation challenges of this study design.
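
As an illustration of the core mechanism only, the following toy sketch (with invented panel size, wave length, and event day) shows the within-wave randomization that makes pre- and post-event groups equivalent in expectation.

```python
import numpy as np

rng = np.random.default_rng(42)

panelist_ids = np.arange(1, 5001)            # hypothetical panel of 5,000 members
fieldwork_days = np.arange(1, 29)            # a four-week wave
assigned_day = rng.choice(fieldwork_days, size=panelist_ids.size)

# Suppose an unexpected event occurs on day 15 of the wave. Because interview
# days were assigned at random, the pre- and post-event groups are equivalent
# in expectation, so comparing their outcomes estimates the event's effect.
pre_event = panelist_ids[assigned_day < 15]
post_event = panelist_ids[assigned_day >= 15]
print(len(pre_event), len(post_event))
```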


Optimizing Probability-Based Panel Recruitment and Mixed-Mode Data Collection in the Netherlands: A Comparative Study of CATI and SMS Approaches

Mr Carsten Broich (Sample Solutions BV) - Presenting Author
Ms Nadica Stankovikj (Sample Solutions BV)
Mr Clark Letterman (Gallup)
Mr Rajesh Srinivasan (Gallup)
Ms Julie Zeplin (Gallup)

This study investigates innovative methodologies for recruiting respondents into a probability-based online panel in the Netherlands, utilizing two distinct recruitment approaches: piggy-backing CATI samples from Random Digit Dialing (RDD) and direct SMS-CATI recruitment. A follow-up survey was conducted to compare response behavior and potential biases across recruitment and interview modalities.

The survey was conducted with four cohorts: (1) 150 respondents recruited via CATI and interviewed via CATI, (2) 150 CATI-recruited respondents interviewed online, (3) 150 respondents recruited via SMS/CATI and interviewed via CATI, and (4) 150 SMS/CATI-recruited respondents interviewed online. The study examines differences in completion rates, mode-specific response patterns, and potential attitudinal biases introduced by recruitment and interview modes.

The findings provide insights into recruitment effectiveness in the Dutch context, revealing variations in completion rates, with online interviews showing higher efficiency but potentially introducing attitudinal biases. Mode bias between CATI and online responses is analyzed, focusing on differences in respondent engagement and data quality. These results contribute to understanding optimal recruitment and interview strategies for probability-based panels in the Netherlands and offer guidance on reducing mode-specific biases to enhance representativeness. The implications for survey methodology and panel management in the European market will be discussed.


The impact of SMS as a postnotification tool on response rates: evidence from a natural experiment in a Belgian probability panel

Professor Gert Thielemans (University of Antwerp) - Presenting Author
Professor Amelie Van Pottelberge (Ghent University)

This study examines the effect of adding SMS to email for the second reminder in a probability-based web panel survey. We use data collected by The Social Study (TSS), a Belgian probability-based panel with over 5,500 panelists. During the first four waves of TSS, panelists received an invitation email, followed by two email reminders. Starting from the fifth wave, the second reminder was sent via SMS as well as email. We aim to assess the impact of this change on response rates, response quality, and nonresponse bias.

Firstly, we will compare response rates. As some panelists (N = 1,594) have experienced both modes of second reminders, we exploit within-respondent variation with fixed-effects logistic models to evaluate the overall likelihood of response based on demographic variables and reminder type. Additionally, survival analyses on the full panel will be used to compare the speed of response between the two modes.

Previous research has shown mixed results regarding the effectiveness of email versus SMS reminders. For instance, Andreadis (2020) found that SMS reminders can significantly improve response rates in mobile-friendly surveys, while Keding et al. (2016) reported unclear results for postnotification. Hansen and Pedersen (2012) reported lower response rates for SMS compared to email, suggesting that the effectiveness of SMS may depend on the context and target audience.

Andreadis, I. (2020). Text message (SMS) pre-notifications, invitations and reminders for web surveys. Survey Methods: Insights from the Field, 1-12.
Keding, A., Brabyn, S., MacPherson, H., Richmond, S. J., & Torgerson, D. J. (2016). Text message reminders to improve questionnaire response rates. Journal of Clinical Epidemiology, 79, 90-95.


The role of survey experience in determining subsequent nonresponse in an online probability panel: a survival analysis

Mrs Katya Kostadintcheva (London School of Economics and Political Science) - Presenting Author
Professor Patrick Sturgis (London School of Economics and Political Science)
Professor Jouni Kuha (London School of Economics and Political Science)

Online probability panels are an increasingly common feature of the modern survey landscape. Their design is based on recruiting a random sample of respondents who agree to complete surveys at regular intervals for small incentives. However, compared to interviewer-based survey panels they are characterised by considerably higher rates of nonresponse at each panel wave, in addition to already low initial recruitment rates, making the cross-sectional response rate for this type of design very low. Given their increasing prevalence, it is essential to better understand the factors that lead online panel respondents to decline survey invitations. In this paper we examine how different measures of survey experience affect subsequent nonresponse to survey invitations.
This research uses data from the Verian Public Voice, a commercially operated online probability panel, which is used for repeated cross-sectional studies. We employ a discrete-time survival analysis where the outcome is respondents’ first nonresponse to a survey invitation, following an earlier survey completion. This approach accommodates the unbalanced data structure typical of such panels, where some panel members receive more frequent survey invitations than others based on their response propensity.
We find that several aspects of the survey experience influence respondents’ propensity to respond to the next survey invitation. These include the extent to which respondents report enjoying the survey, the number of days since the last invitation, the average survey duration and the individual survey length.
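
A hedged sketch of the discrete-time survival setup described above: the data are reshaped so that each row is one panelist by invitation received after their first completion, and the outcome flags the first declined invitation. The toy data and variable names below are assumptions, not the authors' measures.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
rows = []
for pid in range(2000):
    enjoyed = rng.uniform(0, 10)                      # invented enjoyment score
    for inv in range(1, rng.integers(2, 15)):
        days_gap = rng.integers(7, 60)
        p_decline = 1 / (1 + np.exp(3 + 0.2 * enjoyed - 0.02 * days_gap))
        event = rng.binomial(1, p_decline)
        rows.append((pid, inv, enjoyed, days_gap, event))
        if event:                                     # leaves the risk set after first nonresponse
            break

pp = pd.DataFrame(rows, columns=["pid", "invitation_number", "enjoyed_last_survey",
                                 "days_since_last_invitation", "first_nonresponse"])

# Discrete-time hazard: a logistic regression on the person-period file, with
# the invitation number capturing the baseline hazard.
fit = smf.logit(
    "first_nonresponse ~ C(invitation_number) + enjoyed_last_survey "
    "+ days_since_last_invitation", data=pp).fit()
print(fit.summary())
```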


Balancing Costs and Errors in Probability-Based Survey Recruitment: Lessons from a Web Panel Experiment

Ms Andreja Praček (University of Ljubljana) - Presenting Author
Dr Gregor Čehovin (University of Ljubljana)
Dr Vasja Vehovar (University of Ljubljana)

Traditional evaluations of survey recruitment strategies often focus on a single dimension, such as cost minimisation, bias reduction, or maximising response rates. This study adopts a more holistic approach, integrating survey costs and errors to determine the optimal recruitment strategy. The 2024 experiment, conducted within the 1KA probability-based web panel, involved 7,000 participants randomly assigned to eight incentive groups. Incentives included variations of €5/€10 gift cards as conditional, unconditional, and combined offers, with response rates ranging from 16% to 50%. To assess performance, we calculated the cost per unit of accuracy (CUA), which combines mean squared error (MSE) and costs. The MSE incorporates both variance and squared bias, assessed by comparing survey estimates to official data from the statistical office and national databases. Findings from 172 variables show that the most effective group varied depending on the estimate topic. Across all variables, however, the €5 unconditional gift card emerged as the most efficient incentive, providing the best balance between cost and accuracy.
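
As a rough illustration of the cost-per-unit-of-accuracy idea, the sketch below combines cost and MSE (variance plus squared bias) multiplicatively; the exact formula used in the study may differ, and all numbers are invented.

```python
def mse(estimate: float, benchmark: float, variance: float) -> float:
    """MSE = variance + squared bias, with bias taken against an external benchmark."""
    bias = estimate - benchmark
    return variance + bias ** 2

def cost_per_unit_of_accuracy(total_cost: float, estimate: float,
                              benchmark: float, variance: float) -> float:
    # One possible operationalisation: lower values favour strategies that are
    # both cheap and accurate.
    return total_cost * mse(estimate, benchmark, variance)

# Compare two hypothetical incentive groups on a single estimate.
print(cost_per_unit_of_accuracy(total_cost=5000, estimate=0.42,
                                benchmark=0.40, variance=0.0004))
print(cost_per_unit_of_accuracy(total_cost=9000, estimate=0.41,
                                benchmark=0.40, variance=0.0003))
```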


Analyzing the causal effect of survey burden on nonresponse in probability-based online panels among new panel respondents

Professor Arie Kapteyn (Center for Economic and Social Research, University of Southern California ) - Presenting Author
Mr Htay-Wah Saw (Michigan Program in Survey and Data Science, University of Michigan-Ann Arbor )
Professor Marco Angrisani (Center for Economic and Social Research, University of Southern California )

We present the results of a randomized controlled trial (RCT) that evaluated the causal effect of survey burden on nonresponse in a probability-based online panel. The experiment was implemented within the Understanding America Study (UAS), a probability-based online panel representative of the U.S. adult population. We recruited 2,000 new participants for this experiment and randomly assigned half of the participants to a low survey burden condition (n=1,000) and the remaining half to a high survey burden condition (n=1,000). In the low burden condition, participants received one survey invitation every four weeks, whereas in the high burden condition, participants received one survey invitation every two weeks. The only difference between the two conditions was the frequency of survey invitations, with other design features such as survey topics and questionnaire length remaining the same in both conditions. The RCT began in February 2024 and will continue until the end of December 2024. We found that new panelists on a more frequent survey schedule had higher response rates than those on a less frequent schedule. Subgroup analyses revealed the largest treatment effects among high-education and high-income groups and respondents who are currently working, with no effects found among low-education and low-income groups and respondents who are not currently working. Our findings suggest that the treatment effects were mainly driven by engagement rather than incentive effects linked to receiving a more frequent survey schedule. We discuss the theoretical and practical implications of our findings for improving panel management and retention practices in future studies.


Assessing Bias in Survey Estimates: Comparing Probability-Based and Nonprobability Web Panels to Traditional Probability-Based Surveys

Dr Gregor Čehovin (Faculty of Social Sciences, University of Ljubljana) - Presenting Author
Professor Vasja Vehovar (Faculty of Social Sciences, University of Ljubljana)
Ms Andreja Praček (Faculty of Social Sciences, University of Ljubljana)
Mr Luka Štrlekar (Faculty of Social Sciences, University of Ljubljana)
Ms Andrea Ivanovska (Faculty of Social Sciences, University of Ljubljana)

The comparative quality of survey estimates derived from traditional probability-based surveys (TPS) and various web panels remains a substantial methodological concern. This study evaluated potential biases in survey estimates across three modes: TPS, probability-based web panels (PWP), and nonprobability web panels (NWP). The analysis encompassed 700 question items from 10 concurrent surveys conducted in Slovenia, using official statistics from TPS as benchmarks. Relative bias (RB), defined as the difference between panel estimates and external population values, served as the primary metric for evaluating data quality.

The results revealed substantial differences in relative bias (RB) across survey modes and topics. In PWP, 30% of estimates exhibited an RB exceeding 10%, while this increased to 40% in NWP. Topic-specific analysis indicated particularly high bias in measurements related to income and living conditions, with 56% of PWP and 70% of NWP estimates exceeding 10% RB. General opinion items showed comparatively lower bias, with 27% and 35% of estimates exceeding 10% RB in PWP and NWP, respectively.

A consistent pattern emerged where PWP and NWP respondents reported progressively lower levels of happiness, trust, religiosity, economic optimism, and EU support compared to TPS respondents. Systematic differences were particularly evident in responses regarding sensitive social issues, such as immigrant acceptance and perceived climate change responsibility, which were lower across this progression. Conversely, PWP and NWP respondents progressively reported higher levels of political engagement, long-term health issues, and political criticism compared to TPS respondents.

The findings demonstrate that while both web panel types exhibit substantial deviations from TPS benchmarks, the differences between PWP and NWP are relatively modest. These results provide insights into the trade-offs researchers face when choosing between the costly but more accurate TPS and the more cost-effective web panels, particularly when considering specific research topics and resource constraints.