Responsive and Adaptive Design (RAD) for Survey Optimization 2
Session Organisers |
Dr Asaph Young Chun (Statistics Research Institute, Statistics Korea)
Dr Barry Schouten (Statistics Netherlands)
Time: Wednesday 17th July, 16:30 - 17:30
Room: D04
This session is devoted to discussing an evidence-based approach to guiding real-time design decisions during the course of survey data collection. We call it responsive and adaptive design (RAD), a scientific framework driven by cost-quality tradeoff analysis and optimization that enables the most efficient production of high-quality data. The notion of RAD is not new, nor is it a silver bullet that resolves all the difficulties and challenges of complex survey design. RAD embraces the precedents and variants of responsive and adaptive design that survey designers and researchers have practiced for decades (e.g., Groves and Heeringa 2006; Wagner 2008). In this session, we present papers that discuss any of the four pillars of RAD: survey process data and auxiliary information; design features and interventions; explicit quality and cost metrics; and quality-cost optimization tailored to survey strata. The papers will discuss how these building blocks of RAD are addressed and integrated, in the spirit of the papers published in the 2017 JOS special issue on RAD and the 2018 JOS special section on RAD (edited by Chun, Schouten and Wagner). We particularly welcome RAD ideas implemented for survey-assisted population modeling, rigorous optimization strategies, and total survey cost-error modeling.
This session will present RAD papers involving applied or theoretical contributions, addressing questions such as:
1. What approaches can be used to guide the development of cost and quality metrics in RAD and their use over the survey life cycle?
2. Which RAD methods can identify phase boundaries or stopping rules that optimize responsive designs?
3. What are best practices for applying RAD to produce high-quality data in a cost-effective manner?
4. Under what conditions can administrative records or big data be adaptively used to supplement survey data collection and improve data quality?
Keywords: Responsive design, adaptive design, survey optimization, tradeoff analysis, total survey error
Professor Barry Schouten (Statistics Netherlands and Utrecht University)
Miss Shiya Wu (Utrecht University) - Presenting Author
Dr Joep Burger (Statistics Netherlands)
Dr Ralph Meijers (Statistics Netherlands)
Adaptive survey design leans heavily on accurate estimates of survey design parameters such as contact propensities, participation propensities and costs per sample unit. Biased estimates of such parameters may lead to suboptimal design decisions. Furthermore, the imprecision of the estimates must be acknowledged in order to avoid overly optimistic projections of yield and/or costs. Bayesian adaptive survey design can be a powerful tool to account for this uncertainty and to include knowledge from historic survey data and from the data collection staff involved in monitoring surveys.
Bayesian adaptive survey design implies that, prior to data collection, distributions for the survey design parameters are elicited and incorporated in the optimization model. During data collection and in between rounds of the survey, the priors are updated, gradually allowing for more confident design decisions.
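As a rough illustration of such between-round updating (a minimal sketch under our own assumptions, not the authors' implementation; it assumes a conjugate Beta prior on a single response propensity, and all figures are invented):

```python
# Sketch: conjugate Beta-Binomial updating of a response propensity.
# Hypothetical numbers throughout; not taken from the paper.
a0, b0 = 80.0, 120.0               # elicited prior Beta(80, 120), mean 0.40
responses, sample_units = 55, 160  # outcomes observed in the current round

# Conjugate update: posterior is Beta(a0 + successes, b0 + failures).
a1 = a0 + responses
b1 = b0 + (sample_units - responses)

print(f"prior mean {a0 / (a0 + b0):.3f} -> posterior mean {a1 / (a1 + b1):.3f}")
# The sharper posterior then feeds the optimization model for the next round.
```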
A first natural question is how to elicit prior distributions. While this may be relatively straightforward for a repeated cross-sectional survey, it is not at all clear for new surveys or surveys conducted at a low frequency. In the paper, we discuss prior elicitation in both settings. We focus, however, on the setting where a survey is new. We propose to score historic surveys on their similarity to the new survey across a range of survey metadata and to include the scores through power priors. We illustrate the methodology in two case studies: the Dutch SILC survey and a survey on the energy use and efficiency of the Dutch housing market.
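For readers unfamiliar with power priors, the standard construction (Ibrahim and Chen, 2000) raises the historic-data likelihood to a weight a0 in [0, 1]. A sketch under our own assumptions, with the similarity score and all counts invented for illustration:

```python
# Sketch: a power prior for a response propensity from one historic survey.
# With binomial data and a flat Beta(1, 1) baseline prior, raising the
# historic likelihood to the power a0 again yields a Beta distribution.
s0, f0 = 400, 600  # historic respondents and nonrespondents (hypothetical)
a0 = 0.3           # similarity score of the historic survey, used as weight

alpha = 1 + a0 * s0  # power prior: Beta(1 + a0*s0, 1 + a0*f0)
beta = 1 + a0 * f0
print(f"power prior: Beta({alpha:.0f}, {beta:.0f}), "
      f"mean {alpha / (alpha + beta):.3f}")
# a0 = 0 discards the historic survey; a0 = 1 pools it at full weight.
```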
Mr Stephen Kaputa (US Census Bureau) - Presenting Author
Ms Yukiko Ellis (US Census Bureau)
The U.S. Census Bureau implemented a responsive and adaptive design (RAD) procedure for data collection in the 2017 Economic Census with the goal of optimizing response rates across domains. This is the largest-scale implementation of these methods, which were first tested and later permanently adopted by the Annual Survey of Manufactures. At a predetermined point in the collection cycle, nonresponding establishments receive a reminder letter either by certified mail (expensive) or standard mail (inexpensive), with the letter type determined by an allocation that assigns a higher proportion of the certified letters to domains with low unit response rates. This targeted allocation procedure balances the cost-quality trade-off in real time while ensuring that all establishments receive some form of nonresponse follow-up. In this paper, we describe the challenges associated with performing a RAD procedure in a nationwide data collection effort that covers economic sectors of varying size and characteristics, and we present analyses of the effects of the intervention on unit response rates and costs.
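A targeted allocation of this kind can be sketched in a few lines; the rule below (weighting domains by their response-rate shortfall) is our own illustrative choice, not the Census Bureau's production procedure, and all numbers are invented:

```python
# Sketch: allocate a fixed budget of certified letters across domains,
# favouring domains with low unit response rates. Hypothetical rule and data.
budget_certified = 1000
nonrespondents = {"A": 500, "B": 800, "C": 300}
response_rate = {"A": 0.70, "B": 0.45, "C": 0.60}

target = 0.80  # desired unit response rate
weights = {d: max(target - r, 0.0) * nonrespondents[d]
           for d, r in response_rate.items()}
total = sum(weights.values())

# Certified letters in proportion to weighted shortfall; all remaining
# nonrespondents still get a standard-mail reminder. Rounding may shift
# a letter or two between domains.
allocation = {d: round(budget_certified * w / total)
              for d, w in weights.items()}
print(allocation)  # {'A': 128, 'B': 718, 'C': 154}
```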
Miss Catherine Grant (-) - Presenting Author
The Crime Survey for England and Wales (CSEW) is a continuous face-to-face cross-sectional study conducted by Kantar Public on behalf of the Office for National Statistics.
Advance letters are used on the CSEW to pre-notify sampled households that they have been selected for participation and that an interviewer will call round to conduct an interview. The letters include an unconditional incentive of a book of first-class stamps, at a unit cost of £4.02 per book. The letters and unconditional incentive aim to increase participation rates by encouraging the residents of each sampled household to take part in the survey.
Around 53,000 addresses are issued into the field each year for the CSEW, which makes providing an unconditional incentive a costly exercise. In 2017 and 2018 we ran a number of experiments to test the effectiveness of alternative incentive strategies for the survey and their impact on the overall response rate.
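To give a sense of scale (our own back-of-envelope arithmetic from the figures above, ignoring print and postage): 53,000 addresses × £4.02 per book ≈ £213,000 per year on the stamp incentive alone.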
Three alternative strategies were tested:
1. An unconditional stamp incentive combined with a conditional £5 incentive provided post-interview
2. A conditional £10 incentive provided post-interview (with no unconditional incentive)
3. An unconditional tote bag incentive enclosed with the advance mailing.
The early experiments with monetary incentives suggested that the unconditional stamp incentive was important in driving response. The third experiment was therefore designed to test whether this was due to the stamps themselves or whether an alternative incentive could be offered.
This paper looks at the impact of these incentive strategies on response rates, the level of effort required to achieve an interview, and the proportion of respondents who reported having read the mailings (for unconditional incentives only).