Evaluating Quality of Alternative Data Sources to Surveys 2

Coordinator 1: Dr Paul Beatty (U.S. Census Bureau)
Alternative data sources (including but not limited to administrative records, passively collected data, and social media) are increasingly seen as potential complements to, and in some cases replacements for, self-reported survey data. These alternatives are especially appealing given the rising costs and declining response rates of traditional surveys. Incorporating them into datasets may improve the completeness and timeliness of data production, reduce respondent burden, and in some cases provide more accurate measurements than self-reports.
Nevertheless, the sources and extent of measurement errors within these alternative data sources, as well as the reasons for missingness, are not always well understood. This can create challenges when combining survey and alternative data, or when substituting one for the other. For example, apparent changes over time could be artifacts of record system characteristics rather than substantive shifts, and the reasons for exclusion from administrative records may differ from the reasons for declining survey participation. A clear understanding of these various errors and their consequences is important for ensuring the validity of measurement.
This session welcomes papers that present qualitative or quantitative methodology for evaluating the quality of alternative data sources; that explore the causes, extent, and consequences of errors in alternative data sources; or that describe case studies illustrating challenges or successes in producing valid datasets that combine survey and alternative data sources.