All time references are in CEST
Smart surveys: Data collection and logistics 1
Session Organisers | Dr Anne Elevelt (Statistics Netherlands), Dr Bella Struminskaya (Utrecht University), Dr Vera Toepoel (Statistics Netherlands)
Time | Tuesday 18 July, 16:00 - 17:00
Room | U6-22
Smartphones and sensors can be used to extend traditional data collection. Smart surveys combine primary and secondary data collection and are a hybrid form between traditional types of data (e.g. survey data) and new forms of data (e.g. sensor data and other forms of big data). This includes smartphone apps, external sensors (e.g. wearables or smart meters) and data donation. Smart surveys aim to ease the response task, decrease respondent burden and/or improve measurement accuracy.
Smart surveys are thus very promising, but many questions remain about how to apply them successfully on a large scale. Do people really want to participate? How do we convince them? And will they continue to participate if a survey takes several days? In this session we focus on how to collect data in large-scale smart survey projects. We would like to share experiences from ongoing pilots and large-scale projects, and open up a discussion so that we can better exploit the potential of smart surveys.
In this session, we invite papers that focus on the logistics and data collection strategies of smart survey projects. This includes topics such as:
- Examples of logistics and data collection strategies
- Current practices, implementations or survey protocols
- Design choices in smart survey projects
- Sampling and contacting respondents
- Recruitment strategies, such as interviewers, letters and incentives
- Strategies to reduce drop-out and increase respondent motivation over time
In addition, all other topics relevant to the session are welcome. For questions, please contact the session organisers.
Keywords: Smart surveys; Data collection; Logistics; Recruitment; Motivation.
Dr Kevin Tolliver (U.S. Census Bureau) - Presenting Author
Stopping rules in survey data collection can be a very practical method of controlling costs and managing surveys. In the 2019-2022 data collections, the Survey of Income and Program Participation (SIPP) has stopped certain cases in large batches, meaning no further contact attempts were allowed. The survey strategically chose which cases to stop so that the final data is the most valuable to data users. This type of intervention has become an integral part of the survey's adaptive survey design procedures. This paper examines stopping work in the SIPP: the proxy-cost savings associated with stopping work, the efficiency of the interviewing staff, possible bias implications, and the potential for increasing response rates on neighboring cases.
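The idea of strategically choosing which cases to stop could be sketched as a value-per-cost ranking. The sketch below is a hypothetical illustration in Python, not SIPP's actual rule; the case structure, field names, and the propensity and cost figures are all invented for the example.

```python
# Hypothetical batch stopping rule: rank still-active cases by predicted
# response propensity per unit of remaining contact cost, and stop the
# lowest-ranked batch. All values here are illustrative, not SIPP data.

def select_cases_to_stop(cases, batch_size):
    """Return the ids of the batch_size cases with the lowest
    value-to-cost ratio, i.e. the least promising to keep working."""
    ranked = sorted(
        cases,
        key=lambda c: c["response_propensity"] / c["cost_per_attempt"],
    )
    return [c["id"] for c in ranked[:batch_size]]

cases = [
    {"id": 1, "response_propensity": 0.05, "cost_per_attempt": 40.0},
    {"id": 2, "response_propensity": 0.30, "cost_per_attempt": 25.0},
    {"id": 3, "response_propensity": 0.02, "cost_per_attempt": 60.0},
    {"id": 4, "response_propensity": 0.20, "cost_per_attempt": 30.0},
]

stopped = select_cases_to_stop(cases, batch_size=2)
print(stopped)  # → [3, 1]: the two cases least likely to pay off
```

In practice the ranking would also weigh how much each case contributes to data quality for data users (e.g. coverage of underrepresented groups), which a single propensity score does not capture.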
Mr Johannes Gaul (University of Mannheim)
Professor Davud Rostam-Afschar (University of Mannheim)
Mr Thomas Simon (University of Mannheim) - Presenting Author
We examine the optimal design of a survey invitation letter that specifically addresses corporate decision-makers. Varying five key elements of a survey invitation letter, we implement a full-factorial experiment with adaptive randomization instead of static group composition. Specifically, we apply randomized probability matching that allocates more observations to invitation letters that yield higher participation rates as the experiment progresses. Using such a method of multi-armed bandit (MAB) optimization, we find that personalizing the letter, emphasizing the authority of the sender and pleading for help increase survey participation rates while stressing strict data protection and altering the position of the survey URL do not have a response-enhancing effect. Due to its desirable properties, MAB optimization offers opportunities to conduct more field experiments in related scientific disciplines such as management accounting.
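Randomized probability matching, often implemented as Thompson sampling, can be sketched as follows. This is a minimal illustration of the allocation mechanism only; the variant names and response counts are invented and are not the study's actual treatment arms or results.

```python
import random

# Thompson sampling / randomized probability matching sketch:
# each letter variant keeps a Beta posterior over its participation
# rate; the next letter goes to the variant with the highest draw,
# so better-performing variants accumulate observations over time.

def choose_variant(stats):
    """Sample a participation rate from each variant's Beta(successes+1,
    failures+1) posterior and pick the variant with the highest draw."""
    draws = {
        name: random.betavariate(s["responses"] + 1, s["nonresponses"] + 1)
        for name, s in stats.items()
    }
    return max(draws, key=draws.get)

def update(stats, variant, responded):
    """Record the outcome of one mailed invitation letter."""
    key = "responses" if responded else "nonresponses"
    stats[variant][key] += 1

# Invented running totals for two hypothetical letter variants.
stats = {
    "personalized": {"responses": 30, "nonresponses": 70},
    "data_protection": {"responses": 12, "nonresponses": 88},
}

random.seed(0)
picks = [choose_variant(stats) for _ in range(1000)]
# The variant with the higher observed rate is chosen far more often,
# while the weaker arm still receives occasional exploratory picks.
print(picks.count("personalized") > picks.count("data_protection"))
```

Unlike a static full-factorial split, this adaptive allocation reduces the number of invitations "wasted" on clearly inferior letter versions while the experiment is still running.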
Mr Joeri Minnen (hbits VUB) - Presenting Author
Dr Theun Pieter van Tienoven (VUB-TOR)
Today, National Statistical Institutes (NSIs) face challenges and changes in the way they produce official statistics. On the one hand, technological developments create the need for paradigm shifts in methodology. On the other hand, modern societal changes and challenges create new user demands for high-quality data and statistics.
Amidst these challenges and changes, modernisation initiatives should be supported by trustworthy, shareable, and scalable processes that take 'smart' ways of collecting data into account. At the same time, these processes must remain standardized for reasons of comparability, yet be modular enough to meet specific needs and be disseminated quickly. At the European level, a collaboration between Eurostat and the NSIs has resulted in the European Statistical System (ESS), which is aimed at enhancing the strengths of harmonised statistical methods (e.g., comparability).
The Household Budget Survey (HBS) and the Time Use Survey (TUS) are two European surveys that would substantially benefit from new technological developments. These surveys face challenges in (at least) four domains: (i) the user interface (UI) and user experience (UX) design of a tool to collect data, (ii) the back-office software or platform design to manage and organize data collections, (iii) the creation of a shareable architecture to run the tool and the platform across the NSIs, and (iv) the development of a legal framework to properly frame the modernization and digitization of data collection.
This contribution reports on a hands-on example of supporting data comparability and harmonisation: the MOTUS data collection platform. Besides introducing MOTUS, its main part maps MOTUS onto the Generic Statistical Business Process Model (GSBPM), which, as a tool for process description, is aimed at sharing statistical production processes in a standardized way.