All time references are in CEST
Optimizing Follow-up: Innovations in Case Prioritization, Non-response Follow-up, and Following Movers

Session Organisers | Dr Brady T. West (University of Michigan), Dr James Wagner (University of Michigan)
Time | Tuesday 18 July, 09:00 - 10:30
Room |
The move to a multimode survey design, especially for longitudinal/panel data collections, presents opportunities to generate data that are higher quality, more timely, and more cost-effective, with reduced respondent burden. However, new challenges emerge when combining CAPI or CATI with Internet Self-Response (ISR). Researchers must now account for workload composition by mode, sample bias, non-response follow-up, and, in the case of longitudinal surveys, following respondents who move between waves.
In this session, we encourage submissions of papers that address questions related to maximizing survey response and data quality, and managing effort, in a multimode data collection setting. Especially for longitudinal surveys, how can CAPI/CATI be leveraged to provide focused non-response follow-up for the hardest-to-reach cases, while ISR provides cost-effective data collection? What does following mobile panel members look like when multimode data collection is incorporated into a longitudinal design? In a web-push survey design, how should researchers structure case prioritization for CAPI/CATI follow-up to ensure a representative sample if nearly all ISR non-respondents are deemed ‘high-priority’?
Keywords: Multimode, Longitudinal, Internet, Movers, Panel
Dr James Wagner (University of Michigan) - Presenting Author
Dr Andy Peytchev (RTI International)
Dr Xinyu Zhang (University of California, Los Angeles)
More costly data collection methods may not be affordable for the full sample. Rather than randomly assigning sample members to the more expensive method, responsive and adaptive survey designs use incoming data from the field to inform changes to the protocol. These incoming data are often converted into model outputs that then serve as the basis for decisions. For example, estimated response probabilities might be used to inform decisions about which cases should (or should not) receive additional recruitment effort. However, these model-based inputs (i.e., probabilities) are always conditional on the specification of the statistical model.
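As a concrete illustration (not part of the abstract), the following is a minimal sketch of how estimated response probabilities might be produced and used to target follow-up effort; the covariates, model choice, and cutoffs are all hypothetical.

# Hypothetical sketch: using estimated response probabilities to target
# follow-up effort. Illustrative only; not the authors' implementation.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# Simulated covariates (e.g., paradata, frame variables) for resolved cases
# with known outcomes, and for still-active cases. All values are invented.
X_resolved = rng.normal(size=(500, 3))
y_resolved = rng.binomial(1, 0.3, size=500)   # 1 = responded
X_active = rng.normal(size=(200, 3))

# Fit a response propensity model on resolved cases, then score active ones.
model = LogisticRegression().fit(X_resolved, y_resolved)
p_hat = model.predict_proba(X_active)[:, 1]

# One possible decision rule: concentrate extra recruitment effort on cases
# with low (but not negligible) estimated probabilities. The cutoffs are
# arbitrary, and, as the abstract notes, p_hat is always conditional on the
# specification of the model.
extra_effort = (p_hat > 0.05) & (p_hat < 0.30)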
In the published literature in this area, little emphasis has been placed on understanding the impact of model selection procedures and the strength of the available predictors. We have undertaken a simulation study to address this gap. We use data from the American Community Survey, a survey with very high response rates and large sample sizes. We employ three modeling strategies: 1) paradata-based models, which make predictions from level-of-effort variables that are strongly related to response; 2) “bias” models, which use only predicted survey variables as predictors (essentially acting as balance indicators); and 3) “MSE” (mean squared error) models, which balance the paradata and bias models. We then use the predictions from each of these modeling strategies to allocate follow-up effort across the active cases; a fourth, control condition assigns equal effort across cases. We also vary the strength of the available predictors to see whether this changes the conclusions, and we simulate the impact of different budget levels. We examine differences in the resulting bias, MSE, and several data quality indicators across the modeling approaches, predictors, and budgets.
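One way the allocation step could look, sketched under invented inputs (the abstract does not specify an allocation rule; the proportional split, score vectors, and budget below are assumptions for illustration):

# Hypothetical sketch of budget-constrained effort allocation. The four
# score vectors stand in for the strategies described above; all numbers
# are invented.
import numpy as np

def allocate_effort(scores: np.ndarray, budget: float) -> np.ndarray:
    """Split a fixed budget of follow-up attempts proportionally to scores."""
    scores = np.clip(scores, 0.0, None)       # no negative effort
    if scores.sum() == 0:                     # degenerate case: equal effort
        return np.full(scores.shape, budget / scores.size)
    return budget * scores / scores.sum()

n_active = 100
rng = np.random.default_rng(0)
predictions = {
    "paradata": rng.uniform(size=n_active),   # stand-in model outputs
    "bias": rng.uniform(size=n_active),
    "mse": rng.uniform(size=n_active),
    "control": np.ones(n_active),             # equal-effort control condition
}
allocations = {name: allocate_effort(s, budget=300.0)
               for name, s in predictions.items()}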
Dr Hafsteinn Einarsson (University of Iceland) - Presenting Author
Survey organizations aiming to improve response rates in the later stages of fieldwork often attempt refusal conversions. However, reestablishing contact with units that have refused participation at prior stages of fieldwork can be costly and time consuming. In this article, the potential of attempting a refusal conversion within a single contact is investigated. In a mixed-mode survey of young adults in Iceland, individuals who were contacted by telephone and refused participation were immediately offered the option of self-completing the survey on the web at a time of their choosing. Results indicate that this procedure can meaningfully improve response rates in a survey of young adults in Iceland. The additional respondents gained through this procedure were largely similar to those who responded in other modes with respect to demographic background and responses to survey items.
Mr Sergio D. Martinez-Martinez (University of Michigan) - Presenting Author
Mrs Heather M. Schroeder (University of Michigan)
Mrs Evanthia Leissou (University of Michigan)
Dr Brady T. West (University of Michigan)
Declining response rates (RR) in longitudinal surveys, as seen in the Health and Retirement Study (HRS), where the panel RR fell from 88% in 2010 to 74% in 2020, threaten data quality. To combat this trend, this study evaluates two responsive survey design strategies in the 2022 HRS: case prioritization and an “endgame” incentive offer.
The case prioritization strategy used an influence measure (IM) calculated across key survey variables to assess each active case's potential to reduce nonresponse bias. For each variable, an indicator (-1, 0, or 1) was computed according to whether the IM was below zero (the case would worsen the estimate), exactly zero (no effect), or above zero (the case would improve the estimate). The net sum of these indicators was then calculated, and cases scoring above zero were prioritized, as they would be expected to improve more estimates than they worsen.
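A minimal sketch of this indicator-and-net-sum scoring (the scoring rule is taken from the abstract; the influence-measure values, case IDs, and variable names are invented):

# Sketch of the sign-indicator scoring described above. Only the scoring
# rule comes from the abstract; all inputs here are hypothetical.
from typing import Dict, List

def priority_score(influence_measures: Dict[str, float]) -> int:
    """Net sum of -1/0/+1 indicators over per-variable influence measures."""
    return sum((im > 0) - (im < 0) for im in influence_measures.values())

def prioritized_cases(cases: Dict[str, Dict[str, float]]) -> List[str]:
    """Return cases whose net score exceeds zero, i.e. cases expected to
    improve more estimates than they worsen."""
    return [cid for cid, ims in cases.items() if priority_score(ims) > 0]

# Invented example: case A improves two estimates and worsens one (net +1),
# while case B worsens two and leaves one unchanged (net -2).
cases = {
    "A": {"income": 0.4, "health": 0.1, "employment": -0.2},
    "B": {"income": -0.3, "health": 0.0, "employment": -0.1},
}
print(prioritized_cases(cases))  # ['A']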
For the endgame strategy, eligible panelists who had not yet responded in 2022 were randomly assigned to either a treatment or control group. Upon reaching the 12th face-to-face or telephone contact attempt, those in the treatment group received a letter promising an additional $100 incentive upon completing the interview.
The findings demonstrate that both strategies were effective. Case prioritization significantly increased the RR among prioritized cases (37% vs. 29%), particularly among those with lower education, lower employment, and more limitations in activities of daily living. The endgame offer significantly increased the RR in the treatment group (31% vs. 23%), especially among younger cohorts and higher-educated respondents. In addition, treated respondents required fewer contact attempts on average (6.9 vs. 8.1).
These results emphasize the importance of case prioritization to ensure that effort is focused on cases most likely to improve survey estimates. At the same time, targeted incentive increases help to secure responses from hard-to-reach respondents in panel surveys, ultimately reducing fieldwork costs and enhancing representativeness.