ESRA 2025 Preliminary Program
All time references are in CEST
The use of differential incentives in surveys 2
Session Organisers: Dr Alessandra Gaia (University College London), Ms Gerry Nicolaas (NatCen)
Time: Thursday 17 July, 15:30 - 17:00
Room: Ruppert Wit - 0.52
There is increasing interest in the use of differential incentives – i.e. offering varying levels or types of incentive to different sample members – to improve survey response rates, minimise non-response bias, and increase cost-effectiveness. For example, higher-value incentives may be targeted (i.e. "targeted incentives") at specific population subgroups that are typically under-represented or of particular research interest (e.g. low-income respondents), or that are expected to be more responsive to incentives (e.g., in longitudinal studies, prior-wave non-respondents). Researchers may also offer higher-value conditional monetary incentives for survey participation within a certain timeframe (i.e. "early-bird" incentives) or at the end of fieldwork, as a tool to convert unproductive cases. Interviewers may also be instructed to use incentives at their discretion where other persuasion strategies are not effective.
While the promise of differential incentives is to allocate the survey budget efficiently and improve data quality, several questions remain unresolved, particularly concerning their effectiveness, logistics and ethics. These include, for example, the fairness of offering incentives of different amounts or types for completing the same task, whether the differential design must be disclosed to participants, and the risk of identifying or stigmatising sample members as belonging to hard-to-reach groups.
We welcome empirical and theoretical submissions on:
- The effect of differential incentives on response rates, representativeness, item non-response, measurement and other aspects of data quality;
- Applications of differential incentives in cross-sectional studies, where limited information on sample members is available for targeting;
- Cost-effectiveness;
- Ethical issues, and the acceptability of differential incentives to sample members, interviewers, funders and ethics bodies;
- Operational and technological barriers to implementing differential incentives, especially in mixed-mode contexts.
Keywords: Differential incentives, targeted incentives, representativity, inclusiveness, non-response bias, push to web, longitudinal studies, non-response.
Papers
The Effects of Increasing the Incentives During Fieldwork
Dr Uta Landrock (LIfBi – Leibniz Institute for Educational Trajectories) - Presenting Author
This work examines the effects of increasing incentives during fieldwork in a cross-sectional survey. Initially, a 10 Euro postpaid incentive was offered; because of low response rates in the first weeks of data collection, the incentive was increased to 50 Euros. Additionally, within the group receiving the increased incentive, half of the sample was randomly assigned an additional 5 Euro prepaid incentive.
The results of our logistic regression analysis show that the increased incentive is associated with lower participation probabilities. This outcome seems counter-intuitive at first but can be explained by selection effects, as the higher incentive was introduced precisely because of low response rates: target persons who were motivated to participate had already done so by the time the higher incentive was introduced. Thus, the observed correlation cannot be interpreted as a causal effect.
We do, however, have a true experimental design for the two groups receiving either 50 Euros postpaid or 50 Euros postpaid plus a 5 Euro prepaid incentive. Here we found that the group receiving the additional 5 Euro prepaid incentive had even lower participation probabilities than the group receiving 50 Euros postpaid alone, although the difference was not statistically significant. This suggests that, when incentives are increased during fieldwork, the prepaid component is not decisive for participation and may even have a negative effect. This finding does not contradict the widely documented positive effect of unconditional prepaid incentives, as the prepaid incentive in our study was not truly unconditional: it was only available to those who had initially declined to cooperate. These results suggest offering a truly unconditional prepaid incentive with the initial invitation, as this approach may be more effective in boosting participation. They also caution against changing the incentive scheme during fieldwork.
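As a hedged illustration of the kind of analysis described in this abstract (not the authors' code), the following Python sketch fits a logistic regression of participation on incentive condition; the data file, column names, and category labels are hypothetical.

```python
# Minimal sketch (hypothetical file and column names) of a logistic
# regression of survey participation on incentive condition.
import pandas as pd
import statsmodels.formula.api as smf

# One row per sample member:
#   participated : 1 if the person completed the survey, else 0
#   incentive    : "10_postpaid", "50_postpaid", or "50_postpaid_5_prepaid"
df = pd.read_csv("fieldwork_outcomes.csv")  # hypothetical file

# Baseline category: the initial 10 Euro postpaid condition. As the abstract
# stresses, the 10-vs-50 contrast is confounded by fieldwork timing
# (selection effects); only the 50 vs 50+5 contrast is experimental.
model = smf.logit(
    "participated ~ C(incentive, Treatment(reference='10_postpaid'))",
    data=df,
)
print(model.fit().summary())
```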
Is it the Thought That Counts? Comparing Monetary and Non-Monetary Survey Incentives
Mr Cameron Raynor (RA2) - Presenting Author
Ms Jessica Weber (RA2)
Survey researchers have long recognized the importance of incentives in improving participation rates and reducing nonresponse bias. However, there is limited research on how specific types and amounts of incentives influence response rates across diverse demographic and behavioral populations. This study explores the effectiveness of varying incentive types and amounts, with a focus on optimizing survey completion rates and data quality.
In this collaborative study, we conducted an experiment to evaluate how different incentive structures—ranging from gift cards of varying amounts and types to altruistic donation options—affect survey participation. Using purpose-built digital advertising recruitment strategies, we directed potential respondents to landing pages where different incentives were offered. Variations in the survey description (e.g., topic, length) were also tested to assess their interaction with incentive type. This approach enabled us to assess participation rates, data quality, and potential biases across subgroups defined by age, gender, income, education, and geographic location. A control group with no incentives was included to serve as a baseline comparison.
The findings provide actionable insights into how different populations respond to specific incentives, advancing the understanding of respondent motivations in survey recruitment. By analyzing completion rates, engagement patterns, and data quality markers, this study offers valuable guidance on tailoring incentive strategies to optimize survey recruitment and ensure representative samples. These results contribute to improving participation rates while maintaining high-quality data in survey research.
Methods to Maximize the Panel Consent Rate in the Recruitment Wave of a New Web Panel
Dr Sebastian Hülle (Institute for Employment Research (IAB)) - Presenting Author
Mrs Luisa Hammer (Institute for Employment Research (IAB))
Mrs Katia Gallegos Torres (Institute for Employment Research (IAB))
Professor Yuliya Kosyakova (Institute for Employment Research (IAB))
Mr Lukas Olbrich (Institute for Employment Research (IAB))
Professor Joseph Sakshaug (Institute for Employment Research (IAB))
Panel consent is the permission given by respondents to be re-contacted for future panel waves. It is a legal prerequisite for participation in subsequent waves. The lower the panel consent rate, the larger the initial panel attrition and the higher the risk of panel consent bias, which threatens data quality. Panel consent is usually asked for only once, at the very end of the questionnaire in the first wave. Despite its importance, research on maximizing panel consent rates is very scarce.
In this study, we conducted an experiment to increase panel consent rates in the first wave of the new online panel survey "International Mobility Panel of Migrants in Germany" (IMPa). We focused on two innovative design features:
(1) A repeated request within the questionnaire: Respondents who do not provide consent at the first request are immediately followed up with a second request asking them to reconsider their decision.
(2) Incentivizing panel consent: Depending on the experimental group, the respondent is offered an additional incentive (€5) for providing panel consent.
The experiment consists of three randomized groups of equal size with different panel consent request designs.
In group 1, respondents receive – if applicable – a second request but no additional incentives for panel consent are offered. In group 2, respondents are offered an additional incentive for providing panel consent at both requests.
In group 3, respondents are offered an additional incentive only if they did not provide consent at the first request. We investigate the effects of a repeated request and of incentivizing panel consent on the panel consent rate. Furthermore, we analyze the costs associated with each design and derive recommendations for panel consent request designs.
The effect of questionnaire duration and incentive on survey engagement, response differentiation and open-ended answer quality: findings from the 2024 European Working Conditions Survey
Mrs Tanja Kimova (Verian) - Presenting Author
Mr Gijs van Houten (Eurofound)
Mr Christopher White (Eurofound)
Miss Hajar Gad (Verian)
Mr Jamie Burnett (Verian)
The European Foundation for the Improvement of Living and Working Conditions (Eurofound) entrusted Verian with conducting the eighth edition of the European Working Conditions Survey (EWCS) in Spring 2024. To future-proof its surveys, Eurofound conducted the EWCS 2024 as a parallel-run study, combining face-to-face and online data collection in all 29 European countries, using telephone and postal push-to-web recruitment methods. The online component featured various innovative test elements to refine and enhance survey methodologies.
Data quality is a multifaceted concept without a single, definitive metric. To address this, we propose three broad categories to assess data quality: survey engagement, response differentiation, and open-ended answer quality.
We define survey engagement as the effort and dedication respondents invest in thoroughly answering all survey questions. It is measured using three indicators: interview duration, item non-response, and uneven interview pace (quantified through stop-start patterns).
Response differentiation is evaluated using non-differentiation indices, including the maximum string of identical responses within a question block, the proportion of identical consecutive responses, and Mulligan’s score.
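As an illustration, here is a minimal Python sketch of how the first two indices named above might be computed from one block of item responses; the response coding is hypothetical, and Mulligan's score is omitted.

```python
# Minimal sketch of two non-differentiation indices for a question block.
# Responses are assumed to be coded as integers (hypothetical coding).
from itertools import groupby

def max_identical_run(responses):
    """Length of the longest string of identical consecutive responses."""
    return max(len(list(group)) for _, group in groupby(responses))

def prop_identical_consecutive(responses):
    """Share of adjacent response pairs that are identical."""
    pairs = list(zip(responses, responses[1:]))
    return sum(a == b for a, b in pairs) / len(pairs)

block = [3, 3, 3, 3, 2, 4, 4, 1]  # hypothetical answers to one item block
print(max_identical_run(block))           # 4
print(prop_identical_consecutive(block))  # 3/7, roughly 0.43
```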
Open-ended answer quality is more challenging to quantify, particularly in multi-language surveys. Our approach considers the number of words provided and the proportion of nonsensical responses, defined as verbatims that cannot be coded into NACE/ISCO classifications.
This paper examines these data quality measures in relation to questionnaire length and incentive levels, with face-to-face data serving as a benchmark for quality.