NEW DEVELOPMENTS IN ADAPTIVE SURVEY DESIGN (PART II)
Session Organiser | Professor Barry Schouten (Statistics Netherlands and Utrecht University) |
Time | Friday 9 July, 15:00 - 16:30 |
Adaptive survey designs have been researched extensively over the last ten years and are now seen as standard options at various statistical institutes. This session presents papers on new developments in the field. These concern open research questions such as how to optimally stratify the target population, how to learn efficiently from historical survey data, how to optimally define stopping rules and phase capacity, how to include measurement error, how to extend ASD to sensor surveys, how to assess interviewer effects without interpenetration, and how robust ASD is to pandemics such as COVID-19.
The session is a closed session and consists of two parts comprising nine papers in total from four institutions. This is part 2.
Keywords: Nonresponse; measurement error; tailoring; survey design
Dr Kees van Berkel (Statistics Netherlands) - Presenting Author
Dr Nino Mushkudiani (Statistics Netherlands)
Professor Barry Schouten (Statistics Netherlands and Utrecht University)
Mr Hamza Ahmadan (Statistics Netherlands)
Since 2018, an adaptive survey design has been implemented successfully for the Dutch Health Survey. In this survey, a sequential mixed-mode strategy, CAWI followed by CAPI, is applied. The design feature that is adapted is the CAPI follow-up. The design is oriented toward reducing nonresponse bias through selective CAPI follow-up. However, mode-specific measurement effects were not taken into account when the survey design was developed.
This presentation describes a re-interview experiment that aims to separate and estimate mode-specific measurement and selection effects. We use historical data to identify groups in the population with the largest differences between CAWI and CAPI for the most important target variables. For these groups, re-interviews with CAWI respondents are conducted. The re-interview samples are randomly divided into two parts: one re-interviewed through CAWI and one re-interviewed through CAPI. The web re-interview serves as a control group.
We discuss the design and use of the re-interview for adaptive survey design. We demonstrate how re-interview sample sizes are computed based on a number of mode-specific measurement effect scenarios. Furthermore, we provide estimates of the anticipated costs and the time span to carry out the experiment. The re-interview questionnaire is explained and we discuss the choice of measurement benchmark mode. Finally, we discuss how the adaptive design is created from the estimates.
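The abstract above mentions computing re-interview sample sizes under a number of mode-specific measurement effect scenarios. As an illustration only (the paper's actual method is not given here), a standard two-sample power calculation for a difference in proportions between the CAWI and CAPI re-interview arms could be sketched as follows; the effect sizes and proportions are hypothetical scenario inputs:

```python
from math import ceil
from statistics import NormalDist


def reinterview_sample_size(effect, p_bar, alpha=0.05, power=0.8):
    """Approximate per-arm sample size for detecting a difference in
    proportions (the anticipated mode-specific measurement effect)
    between two randomized re-interview arms.

    effect : anticipated difference in proportions under a scenario
    p_bar  : average proportion across the two modes
    """
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided critical value
    z_b = NormalDist().inv_cdf(power)          # power quantile
    n = 2 * p_bar * (1 - p_bar) * (z_a + z_b) ** 2 / effect ** 2
    return ceil(n)


# Scenario: a 10 percentage-point mode effect on an item with an
# average prevalence of 50% (hypothetical numbers).
n_per_arm = reinterview_sample_size(effect=0.10, p_bar=0.5)
```

Smaller anticipated effects require sharply larger arms (the sample size grows with 1/effect²), which is why the scenario choice drives the anticipated costs mentioned in the abstract.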
Mr Kevin Tolliver (US Census Bureau) - Presenting Author
The Survey of Income and Program Participation (SIPP) is a face-to-face longitudinal household survey conducted yearly by the U.S. Census Bureau. During a typical data collection period, interviewers are asked to conduct interviews in person over a four-month span covering the late winter and early spring months. In 2020, the SIPP sampled 53,000 households, approximately half new sample and half returning sample. As a result of the pandemic, field interviewers were instructed midway through data collection to stop all in-person interviews and conduct only telephone interviews. While some returning sample cases had phone numbers on file from prior interviews, no contact information was available for the new sample. The SIPP therefore had to rely on administrative data to contact the new sample and conduct interviews. This work discusses how the SIPP 2020 adaptive design compared to prior years.
Dr Nino Mushkudiani (Statistics Netherlands) - Presenting Author
Professor Barry Schouten (Statistics Netherlands and Utrecht University)
In this paper we apply a Bayesian framework for modelling the survey design parameters within the context of adaptive survey design to data from the Dutch Health Survey (DHS). This framework has the advantage of making it possible to incorporate prior knowledge or historical estimates into these models. In order to capture trends in response or target variables, we look at several years of DHS data (2014-2018) and consider different time-independent and time-dependent scenarios for the prior models. The time-independent prior model assumes that parameters do not change over time, while the time-dependent prior model assumes that parameters do change over time and is hence based on the most recent historical data. Furthermore, we include the auxiliary data and target variables of the DHS in this framework. We want to investigate how sensitive a Bayesian analysis of data collection is to change over time, and how the length of the historical survey data window should be chosen. We also aim to define an optimal strategy allocation in terms of quality and cost indicators, such as budget, response rate or measurement error. We derive quality and cost indicators using models defined through the Bayesian framework for different strategies.
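To illustrate the contrast between time-independent and time-dependent prior scenarios described above, the sketch below uses a simple Beta-Binomial model of a response propensity. The model choice, the discounting scheme, and all yearly counts are hypothetical illustrations, not the paper's method or the actual DHS data:

```python
# Hypothetical yearly (respondents, sample size) counts for one
# population stratum over 2014-2018 (illustrative numbers only).
years = {2014: (612, 1000), 2015: (598, 1000), 2016: (571, 1000),
         2017: (560, 1000), 2018: (541, 1000)}


def posterior_beta(counts, a0=1.0, b0=1.0, discount=1.0):
    """Beta-Binomial posterior for a response propensity.

    discount = 1.0 pools all years equally, as a time-independent
    prior would (parameters assumed constant over time);
    discount < 1.0 down-weights older years geometrically, so the
    posterior leans on the most recent data, mimicking a
    time-dependent prior.
    """
    a, b = a0, b0
    latest = max(counts)
    for year, (resp, n) in sorted(counts.items()):
        w = discount ** (latest - year)  # weight decays with age
        a += w * resp
        b += w * (n - resp)
    return a, b


a, b = posterior_beta(years)                     # time-independent
mean_pooled = a / (a + b)
a_t, b_t = posterior_beta(years, discount=0.5)   # time-dependent
mean_recent = a_t / (a_t + b_t)
```

With declining response rates, the time-dependent posterior mean sits below the pooled one, showing how the choice of scenario (and of the historical data window) shifts the anticipated design parameters.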
Professor Barry Schouten (Statistics Netherlands and Utrecht University) - Presenting Author
Dr Annemieke Luiten (Statistics Netherlands)
Dr Vera Toepoel (Utrecht University)
Dr Katharina Meitinger (Utrecht University)
Smart surveys employ one or more features of smart devices: local storage and processing on the device, internal sensors, linkage to external sensor systems, access to public online data, access to personal online data, and/or consent to linkage of personal data. In many health surveys, physical activity is measured through a series of questions on the types and durations of a number of activities. These measurements are known to be relatively inaccurate. Physical activity can also be measured with dedicated wearable devices. Although such devices can be re-issued multiple times, they still imply an increase in the survey budget. Adaptive survey designs are a promising approach to balancing cost and measurement quality. In this paper, we perform analyses based on a study that used both questions and sensor measurements, and we sketch the contours of an adaptive smart health survey.