
ESRA 2019 full program




Recruiting and Maintaining Probability-Based Online Panel Surveys 3

Session Organisers: Mr Ulrich Krieger (SFB 884, University of Mannheim)
Ms Sabine Friedel (SFB 884, University of Mannheim)
Ms Ines Schaurer (GESIS - Leibniz Institute for the Social Sciences)
Time: Thursday 18th July, 16:00 - 17:30
Room: D21

In recent years, there has been an increasing number of online studies based on probability samples. In addition, traditional large-scale panel surveys have increasingly gone online (e.g. UKHLS, ESS).
Recruiting and maintaining such online panel surveys poses its own unique challenges.

Probability-based online panels aim to combine the best of the offline and online survey worlds. Because there is no general sampling frame for the online population, recruitment typically has to take place offline, and each offline recruitment approach brings its own challenges. Furthermore, the online survey needs to be adjusted constantly to the rapidly changing web and device infrastructure. As most online panel surveys have short intervals between waves, contact procedures for respondents need to be adjusted accordingly.

In this session we invite papers presenting solutions and discussing challenges when recruiting, refreshing or maintaining probability-based online panel surveys. This includes topics such as:
Sampling and contacting respondents for online participation.
Pushing respondents to web participation in online panel studies.
Improving sign-up rates to online panel studies.
Strategies for refreshing the samples of running online panel studies.
Techniques and interventions to prevent panel attrition.
Methods to improve the usability of the online surveys, especially for mobile devices.
In addition, all other topics relevant to the session theme are welcome.






Keywords: probability-based online survey, panel maintenance, panel recruitment

Good Will Hunting: Predicting Response Quality Using Motivation in Longitudinal Surveys

Mr Valentin Brunel (Sciences Po) - Presenting Author
Mr Jean-Baptiste Portelli (Sciences Po)

Observers of survey methodology have pointed out declining response rates and decreasing survey quality as perhaps the most important challenges facing data collection. In order to better understand this phenomenon, we wish to study the impact of respondents' motivation on response rates and response quality.

This study focuses on two longitudinal annual studies collected by the ELIPSS Panel in France from 2013 to 2018. This probability-based longitudinal panel of roughly 3,000 respondents collects monthly answers to social-science surveys. The “ELIPSS Annual survey” collects socio-demographic information. Data quality and its evolution over time are estimated using data from the Annual surveys, the “ELIPSS Digital Practices” studies, response rates in all other surveys, and duration of response.

Our research question is: how does motivation influence the chances of staying longer in the panel and giving quality answers? Initial motivation is captured in a series of indicators collected during recruitment, both open-ended questions and grid questions with limited choices. Ongoing motivation is captured throughout the Annual surveys by short questions on the study itself and the respondent's appreciation and dedication.

These indicators are combined, using different methods such as statistical scores, to predict participation (the chances of leaving the panel) and answer quality ("don't know" answers, refusals, item nonresponse, duration of response). Two main hypotheses will be tested: that the resulting motivation score is correlated with higher participation and better response quality, and that creating multiple categories of motivation can help detect differential participation rates and response quality.
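As a purely illustrative sketch (not the authors' code), the Python example below shows how a simple additive motivation score could feed a logistic regression predicting panel dropout; the scoring rule, variable names, and simulated data are all assumptions made for the example.

```python
# Illustrative sketch only: the abstract does not publish its model code.
# Hypothetical example of scoring motivation and predicting panel dropout
# with a logistic regression, using simulated data in place of ELIPSS records.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 3000  # roughly the size of the ELIPSS panel

# Simulated motivation indicators (e.g. grid items rated 1-5 at recruitment
# plus a short ongoing-motivation item from the Annual survey).
initial_motivation = rng.integers(1, 6, size=(n, 3))
ongoing_motivation = rng.integers(1, 6, size=n)

# A simple additive motivation score; the real scoring method is not specified.
motivation_score = initial_motivation.sum(axis=1) + ongoing_motivation

# Simulated outcome: 1 = left the panel. Dropout probability decreases
# with the motivation score (the first hypothesis in the abstract).
p_dropout = 1 / (1 + np.exp(0.4 * (motivation_score - motivation_score.mean())))
dropped_out = rng.binomial(1, p_dropout)

X = motivation_score.reshape(-1, 1)
X_train, X_test, y_train, y_test = train_test_split(X, dropped_out, random_state=0)

model = LogisticRegression().fit(X_train, y_train)
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"Dropout AUC from motivation score alone: {auc:.2f}")
```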

This communication will present the results of our models predicting the influence of motivation on survival in the panel and on the quality of survey answers. We hope it will contribute to efforts aimed at maximising response rates and data quality. For longitudinal surveys, asking about initial motivation when entering the panel could thus become a tool for panel management.


Examination of Nonresponse Follow-up Impact on AmeriSpeak Panel Data Quality

Dr Ipek Bilgen (NORC at the University of Chicago) - Presenting Author
Dr Michael Dennis (NORC at the University of Chicago)
Dr Nadarajasundaram Ganesh (NORC at the University of Chicago)


The opinion research industry has been criticized for not accurately and representatively capturing the general public's views. The potential for inaccurate polls is predictable in light of the fact that some segments of society (such as low-income households, social and political conservatives, youth, rural households, and less-educated people) can be harder to reach and consequently undercounted in surveys. In this study, we examine the impact of the face-to-face nonresponse follow-up recruitment efforts of NORC's AmeriSpeak Panel in assuring appropriate representation of undercounted segments of the U.S. population. AmeriSpeak is NORC's probability-based panel, built on address-based probability sampling and multiple modes of recruitment, including mail, telephone, and in-person (face-to-face) contact. AmeriSpeak employs a two-phase recruitment method with the goals of improving the demographic representation of the panel sample, achieving a higher response rate, and reducing nonresponse bias relative to a one-phase recruitment approach, to help ensure accuracy in survey results. Accordingly, the research question is whether, and in what ways, a nonresponse follow-up program is an effective technique for correcting nonresponse bias when recruiting households for a probability panel. This study investigates this question by examining a variety of attitudinal and behavioral survey outcomes from five recent AmeriSpeak case studies that target different populations. The study findings indicate that the second-stage face-to-face recruitment boosts the panel response rate and improves representation, specifically among groups who are traditionally most reluctant to respond to surveys. Additionally, the representation of persons with moderate-to-conservative opinions was increased, leading to different, more inclusive key outcome measures on public opinion.
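As an illustration of the kind of comparison such an evaluation involves (not NORC's actual analysis), the toy Python sketch below contrasts an outcome estimate computed from first-phase recruits alone with the estimate obtained after adding face-to-face follow-up recruits; all figures are invented.

```python
# Hypothetical sketch: the abstract reports findings but not code. This toy
# example contrasts an outcome estimate computed from phase-1 recruits only
# with the estimate after adding phase-2 (face-to-face follow-up) recruits,
# the kind of comparison used to gauge nonresponse-bias reduction.
import pandas as pd

# Simulated panel file: recruitment phase and a binary opinion item.
panel = pd.DataFrame({
    "phase": [1] * 700 + [2] * 300,          # 2 = recruited via NRFU follow-up
    "conservative_view": [0] * 450 + [1] * 250 + [0] * 120 + [1] * 180,
})

phase1_only = panel.loc[panel["phase"] == 1, "conservative_view"].mean()
both_phases = panel["conservative_view"].mean()

print(f"Estimate, phase-1 recruits only: {phase1_only:.3f}")
print(f"Estimate, phase 1 + NRFU recruits: {both_phases:.3f}")
# A sizeable gap between the two estimates suggests the follow-up recruits
# differ on the outcome and that skipping NRFU would bias the estimate.
```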


Using Response Patterns in a Panel Survey to Explain Panel Dropout

Ms Isabella Minderop (GESIS - Leibniz Institute for the Social Sciences) - Presenting Author
Dr Bernd Weiß (GESIS - Leibniz Institute for the Social Sciences)


Keeping respondents who are likely to drop out of a panel in the sample is a major task for panel data infrastructures. This is especially important when respondents at risk of dropping out are notably different from other respondents. Hence, it is key to identify those respondents and prevent them from dropping out. Response behavior in previous waves, e.g. response or nonresponse, has been shown to be a good predictor of the next wave's response. Studying patterns of time-to-response and nonresponse over multiple panel waves can be a promising approach to identifying participants with a high likelihood of leaving the panel.

To date, response time has mostly been studied in a cross-sectional context. Longitudinal surveys, however, offer the opportunity to investigate response patterns, focusing in particular on stable and unstable behavior, that is, switching inconsistently between being an early respondent, a late respondent, or a nonrespondent. Relying on assumptions of the frame-selection approach, we derive that respondents without a stable response pattern are more likely to drop out of a panel survey. Also, respondents who tend to reply later than others should be more likely to drop out. In this study, we start by investigating and identifying response patterns. These response patterns are then used to explain panel dropout.
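A minimal, hypothetical sketch of one way such response-pattern (in)stability could be operationalised, not the authors' actual coding, is shown below in Python; the wave data and the instability measure are assumptions made for the example.

```python
# Illustrative sketch under stated assumptions: the GESIS Panel data and the
# authors' exact operationalisation are not shown here. One simple way to
# capture (un)stable response-timing behaviour across waves is to code each
# wave as early / late / nonresponse and count how often a respondent switches.
import pandas as pd

# Toy wave-by-wave response behaviour for three hypothetical respondents.
waves = pd.DataFrame({
    "r1": ["early", "early", "early", "early", "early", "early"],
    "r2": ["early", "late", "nonresponse", "early", "late", "nonresponse"],
    "r3": ["late", "late", "late", "late", "nonresponse", "late"],
})

def instability(pattern: pd.Series) -> float:
    """Share of wave-to-wave transitions where the behaviour changes."""
    changes = (pattern != pattern.shift()).iloc[1:]
    return changes.mean()

scores = waves.apply(instability)
print(scores)
# Expected ordering: r2 (switches every wave) > r3 (one switch) > r1 (stable).
# The abstract's hypothesis is that higher instability and habitual lateness
# go with a higher probability of dropping out in later waves.
```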

Our study relies on data from the GESIS Panel, a bi-monthly, probability-based mixed-mode access panel (n = 5,000) based in Germany. The GESIS Panel includes data collected in web and mail mode. First results indicate that respondents who regularly answer early show a low risk of dropping out.


BeHeardPhilly, a US Municipal Panel of Philadelphians: An Innovative Approach to Multi-Mode Surveys

Dr Heidi Grunwald (Temple University Institute for Survey Research) - Presenting Author

Surveys in general are increasingly alienating customers and citizens due to the burden on their time and the intrusion on our already technology-laden lives. To combat indifference to surveys and to try to move the needle on response rates for hard-to-reach populations, Temple's Institute for Survey Research has spent two and a half years designing and launching a local panel called BeHeardPhilly. Nine thousand four hundred members strong, the panel includes probability-based and opt-in members. The Institute has developed custom-built panel management software with a keen eye towards multi-modal recruitment and survey deployment in an urban setting. We have launched 90+ surveys over the course of two and a half years using different demographic and geo-catchment samples. We propose to study the use of response propensity modeling (RPM) to identify a multivariate statistical model that predicts the likelihood that a panel member will respond on the first attempt of a survey (Lavrakas, Jackson, & McPhee, 2018). We will use a three-stage process to identify, test and refine the propensity model. The first stage will involve building the multivariate model from socio-demographic and behavioral traits that we have collected on at least 1,200 Philadelphians; we will maintain several hold-out samples to test the sensitivity of the multivariate model. In the second stage we will identify the hardest-to-respond panel members and deploy 1) an experiment that varies the invitation to be more culturally relevant and 2) an incentive experiment that varies incentives to hard-to-respond groups. We will use the results from these two experiments to refine the Response Probability (RP) score and the multivariate model for future BeHeardPhilly survey deployments.
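As a hedged illustration of stage one of such a propensity-modelling workflow (not BeHeardPhilly's actual model), the Python sketch below fits a logistic response propensity model on simulated member data, checks it against a hold-out sample, and flags the lowest-scoring members for tailored follow-up; the variables, thresholds, and data are assumptions made for the example.

```python
# Hedged sketch of stage one of the response propensity modelling (RPM)
# described above, using simulated data; variable names and thresholds are
# assumptions, not BeHeardPhilly's actual model.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 1200  # the abstract mentions traits collected on at least 1,200 Philadelphians

members = pd.DataFrame({
    "age": rng.integers(18, 85, n),
    "prior_surveys_completed": rng.poisson(3, n),
    "opt_in": rng.integers(0, 2, n),  # 1 = opt-in member, 0 = probability-based
})
# Simulated outcome: responded to the first invitation of the latest survey.
logit = -2.5 + 0.02 * members["age"] + 0.4 * members["prior_surveys_completed"]
members["responded_first_attempt"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = members[["age", "prior_surveys_completed", "opt_in"]]
y = members["responded_first_attempt"]

# Hold out part of the panel to check the model's sensitivity, as in stage one.
X_train, X_hold, y_train, y_hold = train_test_split(X, y, test_size=0.3, random_state=1)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

members["rp_score"] = model.predict_proba(X)[:, 1]
print("Holdout AUC:", round(roc_auc_score(y_hold, model.predict_proba(X_hold)[:, 1]), 2))

# Stage two would target the lowest-scoring members with tailored invitations
# and incentive experiments; here we simply flag the bottom decile.
hardest_to_respond = members.nsmallest(len(members) // 10, "rp_score")
print("Flagged for tailored follow-up:", len(hardest_to_respond))
```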