
ESRA 2023 Glance Program


All time references are in CEST

The use of differential incentives in surveys

Session Organisers: Dr Alessandra Gaia (University College London), Ms Gerry Nicolaas (NatCen)
Time: Tuesday 18 July, 09:00 - 10:30
Room:

There is increasing interest in the use of differential incentives – i.e. offering varying levels or types of incentive to different sample members – to attempt to improve survey response rates, minimise non-response bias, and increase cost-effectiveness. For example, higher value incentives may be targeted (i.e. “targeted incentives”) to specific population subgroups which are typically under-represented or of particular research interest (e.g. low-income respondents) or which are expected to be more responsive to incentives (e.g. in longitudinal studies, prior wave non-respondents). Researchers may also offer higher value conditional monetary incentives for survey participation within a certain timeframe (i.e. “early-bird” incentives) or at the end of fieldwork, as a tool to convert unproductive cases. Interviewers may also be instructed to use incentives “discretionally”, where other persuasion strategies are not effective.
While the promise of differential incentives is to allocate the survey budget efficiently and improve data quality, several aspects remain unresolved, particularly concerning their effectiveness, logistics and ethical implications. These include, for example, the fairness of offering incentives of different amounts or types for completing the same task, whether the differential design must be disclosed to participants, and the risk of identifying or stigmatising sample members as belonging to hard-to-reach groups.
We welcome empirical and theoretical submissions on:
- The effect of differential incentives on response rates, representativeness, item non-response, measurement and other aspects of data quality;
- Applications of differential incentives in cross-sectional studies, where limited information on sample members is available for targeting;
- Cost-effectiveness;
- Ethical issues and considerations around acceptability for sample members, interviewers, funders and ethical bodies on the use of differential incentives;
- Operational and technological barriers in the implementation of differential incentives, especially in mixed-mode contexts.

Keywords: Differential incentives, targeted incentives, representativity, inclusiveness, non-response bias, push to web, longitudinal studies, non-response.

Papers

Targeting incentives based on geodemographics and participant behaviour: Evidence from a push-to-web survey in the UK

Mr Curtis Jessop (The National Centre for Social Research) - Presenting Author

Higher incentives have been shown to increase survey response rates and potentially improve the representativeness of the participating sample. Where both the marginal impact on response rates and the costs of issuing a larger sample (e.g. due to high postage costs) are high, it can also be relatively cost-effective. However, universally increasing incentives may not be the most efficient use of resources. Instead, targeting higher incentives at those unlikely to participate (and not those willing to participate for a lower incentive) may offer greater value for money. However, in the UK, where the sample is drawn from the Postcode Address File, relatively little is known about the sample, so effective targeting can be difficult.

In 2023 and 2024, the British Social Attitudes study (BSA) tested the impact on participation of offering a conditional £10 or £15 incentive. In line with previous evidence, the higher incentive resulted in a higher response rate. This paper will report on further analysis, exploring the impact of different incentives in different geographies. It focuses on the impact on areas with lower response rates (e.g. areas with higher levels of deprivation) to provide evidence on how an incentive strategy targeted by geography may affect overall response rates, sample quality, and cost-effectiveness.

An alternative approach to targeting incentives to an address-based sample is using participant behaviour. In 2024 an additional arm of the BSA experiment initially offered all sampled participants a £10 incentive, and then increased this to £15 if they had not participated after the initial invitation and reminder. We explore the impact of this relative to offering a £10 or £15 incentive on response rates, sample quality, and cost-effectiveness.

These findings will provide evidence on how targeting incentives may improve quality and efficiency in push-to-web surveys.


Differential incentives in a multi-mode feasibility study for a new UK birth cohort

Dr Erica Wong (Centre for Longitudinal Studies, University College London) - Presenting Author
Professor Lisa Calderwood (Centre for Longitudinal Studies, University College London)
Professor Alissa Goodman (Centre for Longitudinal Studies, University College London)
Professor Pasco Fearon (University College London and University of Cambridge)
Dr Alyce Raybould (Centre for Longitudinal Studies, University College London)
Ms Karen Dennison (Centre for Longitudinal Studies, University College London)
Dr Charlotte Booth (Centre for Longitudinal Studies, University College London)

Incentives are often used to boost response rates; differential incentives are less common in the UK than in the US but are increasingly being used to strategically and cost-efficiently target harder-to-reach groups. However, evidence is mixed as to whether incentives increase participation of those typically underrepresented in surveys, such as lower SES families, young people, and ethnic minority groups.
In this paper, we present findings from an incentive experiment conducted in the Early Life Cohort Feasibility Study (ELC-FS), which tested a combination of unconditional and conditional incentives. The ELC-FS collected information about several thousand babies aged 8-12 months and their families in 2023-2024 to test the feasibility of conducting a new UK-wide birth cohort study. It is funded by the Economic and Social Research Council and led by the Centre for Longitudinal Studies at University College London, with interviewing carried out by Ipsos.
Parent interviews of 30-60 minutes in length were carried out using web, phone and video-interviewing as well as face-to-face, depending on the informant type, and mothers and fathers (including those living in their own households) were recruited directly and separately. The survey included a self-completion (CASI) section, data linkage requests, and for a subsample of families, saliva collection. The main survey was followed up by a non-response web survey.
The experiment tested six incentive conditions: three unconditional incentive conditions (a £5 cash note, a baby bib, or no unconditional incentive) crossed with two levels of conditional incentive (a £10 or £20 voucher). An additional incentive was given to those providing a saliva sample, and targeted differential incentives were offered in Scotland and Northern Ireland for completing the non-response survey. We assess the effectiveness of these incentive strategies across countries.


The Effects of Increasing the Incentives During Fieldwork

Dr Uta Landrock (LIfBi – Leibniz Institute for Educational Trajectories) - Presenting Author

This work examines the effects of increasing the incentives during fieldwork in a cross-sectional survey. Initially, a 10 Euro postpaid incentive was offered, but due to low response rates in the first weeks of data collection, the incentive was increased to 50 Euros. Additionally, within the group receiving the increased incentive, a randomized assignment was implemented, whereby half of the sample received an additional 5 Euro prepaid incentive.
The results of our logistic regression analysis show that the increased incentive is associated with lower participation probabilities. This outcome seems counter-intuitive at first but can be explained by selection effects, as the higher incentive was introduced precisely because of low response rates: target persons who were motivated to participate had already done so by the time the higher incentive was introduced. Thus, the observed correlation cannot be interpreted as a causal effect.
We do have a true experimental design for the two groups receiving either 50 Euros postpaid or 50 Euros postpaid plus a 5 Euro prepaid incentive. Here, the group receiving the additional 5 Euro prepaid incentive had even lower participation probabilities, although the difference was not statistically significant. This suggests that, when incentives are increased during fieldwork, the prepaid component is not decisive for participation and may even have a negative effect. This finding does not contradict the widely proven positive effect of unconditional prepaid incentives, as the prepaid incentive in our study was not truly unconditional: it was available only to those who had not initially cooperated. These results suggest offering a truly unconditional prepaid incentive already with the initial invitation, as this approach may be more effective in boosting participation. They also caution against changing the incentive scheme during fieldwork.
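The selection effect described above can be illustrated with a small simulation (all numbers are invented for the sketch and are not taken from the study): the true causal effect of a higher incentive is positive for every individual, yet the group offered the higher incentive, being selected for low motivation, shows the lower raw participation rate.

```python
import random

rng = random.Random(42)

# Hypothetical model: participation probability rises with both a person's
# latent motivation and the incentive amount (true causal effect positive).
def participates(motivation, incentive):
    p = min(1.0, 1.5 * motivation + 0.002 * incentive)
    return rng.random() < p

population = [rng.random() for _ in range(10_000)]  # latent motivation in [0, 1]

# Phase 1: everyone is offered the 10-euro incentive; the motivated respond.
phase1 = [(m, participates(m, 10)) for m in population]
rate_low = sum(r for _, r in phase1) / len(phase1)

# Phase 2: only prior non-respondents (selected for low motivation)
# are offered the 50-euro incentive.
remaining = [m for m, r in phase1 if not r]
rate_high = sum(participates(m, 50) for m in remaining) / len(remaining)

# The 50-euro group shows the lower raw rate despite the higher incentive,
# so the naive incentive-participation correlation is negative.
print(f"10-euro rate: {rate_low:.2f}, 50-euro rate: {rate_high:.2f}")
```

This is exactly why the abstract warns against a causal reading of the observed correlation: the comparison groups differ systematically in motivation.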


The use of differential incentives in surveys

Dr Alessandra Gaia (Centre for Longitudinal Studies, UCL Social Research Institute, University College London) - Presenting Author
Mr Matt Brown (Centre for Longitudinal Studies, UCL Social Research Institute, University College London)
Professor Lisa Calderwood (Centre for Longitudinal Studies, UCL Social Research Institute, University College London)
Mrs Gerry Nicolaas (NatCen Social Research)
Mr Curtis Jessop (NatCen Social Research)

Targeting higher value incentives at specific population subgroups is a potentially promising approach for improving response rates and efficiently allocating survey budgets. As surveys increasingly rely on the web, with no interviewers to encourage participation, the strategic use of targeted incentives could prove even more useful. Targeted incentives could also be effective at boosting participation amongst under-represented and harder-to-reach groups, increasing inclusivity and reducing non-response bias.

Despite their potential, the application of targeted incentives is not well documented and evidence on their effectiveness is limited. Most experimental evidence comes from the US and is often limited to specific cohorts or timeframes. Additionally, most experimental research compares a group of participants approached using a larger incentive budget with a group approached using a smaller one, and so cannot identify how best to allocate fixed resources. Significant gaps in our knowledge, along with practical and ethical challenges, still need to be addressed.

In this paper we first present initial results from a literature and evidence review on the use of targeted incentives in social surveys, focusing on their application, effectiveness, and ethical and logistical barriers to their implementation. Second, we describe the design of an experiment investigating the effectiveness of targeting higher incentives at residents of more deprived areas in Britain. The experiment is implemented in the 2025 British Social Attitudes survey, a mixed-mode push-to-web large-scale repeated cross-sectional general population survey. Finally, we will discuss our plans for producing a code of ethics aimed at understanding alignment with existing ethical standards, supporting cost-benefit evaluations (e.g., equality versus equity), identifying aspects to be brought to the attention of ethical bodies, funders and study participants, and providing practitioners and studies' Principal Investigators with a “toolkit” for the ethical implementation of targeted incentives.


Is it the Thought That Counts? Comparing Monetary and Non-Monetary Survey Incentives

Mr Cameron Raynor (RA2) - Presenting Author
Ms Jessica Weber (RA2)

Survey researchers have long recognized the importance of incentives in improving participation rates and reducing nonresponse bias. However, there is limited research on how specific types and amounts of incentives influence response rates across diverse demographic and behavioral populations. This study explores the effectiveness of varying incentive types and amounts, with a focus on optimizing survey completion rates and data quality.
In this collaborative study, we conducted an experiment to evaluate how different incentive structures—ranging from gift cards of varying amounts and types to altruistic donation options—affect survey participation. Using purpose-built digital advertising recruitment strategies, we directed potential respondents to landing pages where different incentives were offered. Variations in the survey description (e.g., topic, length) were also tested to assess their interaction with incentive type. This approach enabled us to assess participation rates, data quality, and potential biases across subgroups defined by age, gender, income, education, and geographic location. A control group with no incentives was included to serve as a baseline comparison.
The findings provide actionable insights into how different populations respond to specific incentives, advancing the understanding of respondent motivations in survey recruitment. By analyzing completion rates, engagement patterns, and data quality markers, this study offers valuable guidance on tailoring incentive strategies to optimize survey recruitment and ensure representative samples. These results contribute to improving participation rates while maintaining high-quality data in survey research.


Methods to Maximize the Panel Consent Rate in the Recruitment Wave of a New Web Panel

Dr Sebastian Hülle (Institute for Employment Research (IAB)) - Presenting Author
Mrs Luisa Hammer (Institute for Employment Research (IAB))
Mrs Katia Gallegos Torres (Institute for Employment Research (IAB))
Professor Yuliya Kosyakova (Institute for Employment Research (IAB))
Mr Lukas Olbrich (Institute for Employment Research (IAB))
Professor Joseph Sakshaug (Institute for Employment Research (IAB))

Panel consent is the permission given by respondents to be re-contacted for future panel waves. It is a legal requirement for any participation in subsequent waves. The lower the panel consent rate, the larger the initial panel attrition and the higher the risk of panel consent bias, which threatens data quality. Panel consent is usually asked for only once, at the very end of the questionnaire in the first wave. Despite its importance, research on maximizing panel consent rates is very scarce.

In this study, we conducted an experiment to increase panel consent rates in the first wave of the new online panel survey "International Mobility Panel of Migrants in Germany" (IMPa). We focused on two innovative design features:
(1) A repeated request within the questionnaire: Respondents who do not provide consent at the first request are immediately asked a second time and given the opportunity to reconsider their decision.
(2) Incentivizing panel consent: Depending on the experimental group, the respondent is offered an additional incentive (€5) for providing panel consent.

The experiment consists of three randomized groups of equal size with different panel consent request designs. In group 1, respondents receive – if applicable – a second request, but no additional incentive for panel consent is offered. In group 2, respondents are offered an additional incentive for providing panel consent at both requests. In group 3, respondents are offered an additional incentive only if they did not provide consent at the first request. We investigate the effects of the repeated request and of incentivizing panel consent on the panel consent rate. Furthermore, we analyze the costs associated with each design and derive recommendations for panel consent request designs.
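The three-group design can be sketched in a few lines of code (the group labels and the €5 amount follow the abstract; the assignment and offer logic here are a hypothetical illustration, not the IMPa implementation):

```python
import random

# Extra incentive (in euros) offered at a given panel consent request,
# by experimental group, as described in the abstract.
def consent_incentive(group, request_number):
    if group == 1:                        # repeated request, never incentivised
        return 0
    if group == 2:                        # incentivised at both requests
        return 5
    if group == 3:                        # incentivised only at the second request
        return 5 if request_number == 2 else 0
    raise ValueError("unknown group")

# Equal-size random assignment: shuffle, then split into three thirds.
rng = random.Random(0)
respondents = list(range(9))              # illustrative respondent IDs
rng.shuffle(respondents)
third = len(respondents) // 3
groups = {r: 1 + i // third for i, r in enumerate(respondents)}

for r in sorted(groups):
    offers = [consent_incentive(groups[r], n) for n in (1, 2)]
    print(f"respondent {r}: group {groups[r]}, offers at requests 1 and 2: {offers}")
```

The offer function makes the contrast between the three arms explicit: group 2 incentivises consent unconditionally, while group 3 uses the incentive only as a conversion tool after an initial non-consent.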


Targeted incentives and the National Diabetes Experience Survey: Engaging seldom heard groups living with diabetes

Ms Eileen Irvin (Ipsos) - Presenting Author

The 2024 National Diabetes Experience Survey (NDES) offered a unique opportunity to examine the impact of conditional incentives on survey response rates and data quality. This presentation focuses on the incentive experiment conducted as part of the survey.

The NDES, commissioned by NHS England and administered by Ipsos, gathered data on the experiences of adults (18+) living with type 1 or type 2 diabetes in England for at least 12 months. The survey employed a push-to-web design, incorporating postal mailouts, SMS reminders, online completion, and paper questionnaires to maximize participation. Data was collected between March 18th and July 16th, 2024, and reported at both national and Integrated Care System (ICS) levels.

To investigate the effectiveness of targeted incentives, the NDES offered conditional £5 vouchers to subgroups less likely to respond: younger people, those in deprived areas (based on the Index of Multiple Deprivation - IMD), and those from ethnic minority groups. This experiment aimed to assess the influence of these incentives on response rates and data quality across different demographic groups.

Analysis of the incentive experiment will explore its impact on overall response rates, non-response bias, and potential differences in responses between incentivised and non-incentivised participants. Findings will inform future survey strategies, particularly for engaging seldom heard populations in health research, and contribute to the broader discussion on the ethical and practical considerations of differential incentives in survey research.