Data Collection Management: Monitoring Bias in a Total Survey Error Context

Convenor: Mr Brad Edwards (Westat)
Data from a pretest can be used to produce estimates of questionnaire (specification) error, to make design improvements that address those problems, and to monitor error levels after the changes are implemented in the main data collection phase of a survey. We demonstrate the feasibility of this approach on a recent CAPI (computer-assisted personal interviewing) survey. Problems with the survey protocol for completing a life events calendar were discovered in the pretest, and the protocol was changed for the main data collection. CARI (computer audio-recorded interviewing) data from simple random samples of pretest and main data collection interviews are reviewed to determine whether the change was effective.
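As a rough illustration of the pretest-versus-main comparison described above, the sketch below applies a pooled two-proportion z-test to hypothetical CARI review counts; the figures and the flagging rule are assumptions for illustration, not results from the survey.

```python
import math

def error_rate_comparison(flagged_pre, n_pre, flagged_main, n_main):
    """Compare the share of CARI-reviewed interviews flagged with a
    specification error before and after the protocol change
    (pooled two-proportion z-test, one-sided)."""
    p_pre, p_main = flagged_pre / n_pre, flagged_main / n_main
    p_pool = (flagged_pre + flagged_main) / (n_pre + n_main)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_pre + 1 / n_main))
    z = (p_pre - p_main) / se
    p_value = 0.5 * math.erfc(z / math.sqrt(2))  # P(Z > z): pretest rate higher
    return p_pre, p_main, z, p_value

# Illustrative counts only.
print(error_rate_comparison(flagged_pre=18, n_pre=60, flagged_main=9, n_main=80))
```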
This presentation focuses on recruitment to the national probability sample within the new UK birth cohort study (Life Study, www.lifestudy.ac.uk), describing methodological interventions aimed at improving recruitment into the study and sustaining participants’ engagement. This methodological work forms part of an overall plan to understand the biases that can arise when participants are recruited into a longitudinal study under a legally required “opt-in” approach. To mitigate the implications of the “opt-in” procedure, different strategies to encourage participation will be tested during a pilot phase using a split sample design.
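A minimal sketch of a split sample allocation for such a pilot follows; the arm names and sample size are hypothetical, and the point is simply that sampled cases are randomized to recruitment strategies so that opt-in rates can later be compared across arms.

```python
import random

# Hypothetical recruitment strategies to be compared in the pilot.
STRATEGIES = ["standard_letter", "enhanced_letter", "letter_plus_reminder"]

def assign_arms(case_ids, seed=2014):
    """Randomly allocate sampled cases to strategy arms of (near-)equal size."""
    rng = random.Random(seed)  # fixed seed keeps the allocation reproducible
    cases = list(case_ids)
    rng.shuffle(cases)
    return {case: STRATEGIES[i % len(STRATEGIES)] for i, case in enumerate(cases)}

allocation = assign_arms(range(1, 301))
print({arm: sum(v == arm for v in allocation.values()) for arm in STRATEGIES})
```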
This presentation will highlight examples of innovative methods for managing field operations and interviewer behavior within the Total Survey Error paradigm. Specifically, we will provide a wide range of examples of quality monitoring approaches from surveys conducted in a variety of contexts, including transitional and developing countries. These examples will include new approaches in the collection and use of rich paradata (e.g., call records, audit trails), digital audio recording, ACASI (audio computer-assisted self-interviewing), and mobile technology, as well as innovative uses of digital photography, GPS, and anthropometric data collection methods. We will also provide examples of project dashboards.
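As one concrete example of turning call-record paradata into a dashboard indicator, the sketch below computes a contact rate per interviewer; the record layout and outcome codes are assumptions for illustration.

```python
from collections import defaultdict

# Hypothetical call-record paradata.
call_records = [
    {"interviewer": "A01", "outcome": "contact"},
    {"interviewer": "A01", "outcome": "noncontact"},
    {"interviewer": "B02", "outcome": "contact"},
    {"interviewer": "B02", "outcome": "contact"},
]

def contact_rates(records):
    """Per-interviewer contact rate: contacts divided by call attempts."""
    attempts, contacts = defaultdict(int), defaultdict(int)
    for rec in records:
        attempts[rec["interviewer"]] += 1
        contacts[rec["interviewer"]] += rec["outcome"] == "contact"
    return {iid: contacts[iid] / attempts[iid] for iid in attempts}

print(contact_rates(call_records))
```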
Interviewers’ travel from their homes to respondents’ homes is the largest component of fieldwork costs. With GIS it is now possible to view travel routes in real time and to identify inefficient routing. From call records on interviewers’ connected devices, we can determine instantly whether interviewers are making visits at appropriate times of the day and days of the week, and can redirect interviewers to high-priority cases. Using a recent example, I will show how these approaches can reduce costs and have the potential to reduce survey nonresponse for specific population groups.
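The timing check described above can be sketched as a simple rule over call-record timestamps; the visiting windows below are assumptions, not the actual fieldwork guidance.

```python
from datetime import datetime

# Assumed "appropriate" visiting windows: (weekdays, start hour, end hour).
GOOD_WINDOWS = [
    (range(0, 5), 17, 21),   # Monday-Friday evenings
    (range(5, 7), 10, 20),   # weekends, daytime and early evening
]

def is_appropriate(timestamp: datetime) -> bool:
    """True if a call attempt falls inside one of the recommended windows."""
    return any(timestamp.weekday() in days and start <= timestamp.hour < end
               for days, start, end in GOOD_WINDOWS)

for call in [datetime(2014, 6, 3, 11, 15), datetime(2014, 6, 7, 18, 40)]:
    print(call, "OK" if is_appropriate(call) else "outside recommended window")
```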