All time references are in CEST
Assessing the Quality of Survey Data 2

Session Organiser: Professor Jörg Blasius (University of Bonn)
Time: Wednesday 19 July, 16:00 - 17:30
Room: U6-08
This session presents a series of original investigations of data quality in both national and international contexts. The starting premise is that all survey data contain a mixture of substantive and methodologically induced variation. Most current work focuses on random measurement error, which is usually treated as normally distributed. However, there are many kinds of systematic measurement error, or more precisely, many different sources of methodologically induced variation, all of which may strongly influence the “substantive” solutions. These sources include response sets and response styles, misunderstandings of questions, translation and coding errors, uneven standards among the research institutes involved in data collection (especially in cross-national research), item and unit nonresponse, and faked interviews. We consider data to be of high quality when methodologically induced variation is low, i.e. when differences in responses can be interpreted on the basis of theoretical assumptions in the given area of research. The aim of the session is to discuss different sources of methodologically induced variation in survey research, how to detect them, and the effects they have on substantive findings.
Keywords: Quality of data, task simplification, response styles, satisficing
Mrs Kari-Anne Lund (Statistics Norway)
Mrs Aina Holmøy (Statistics Norway)
Mrs Emma Schiro (Statistics Norway) - Presenting Author
Child respondents as young as 9 make up a substantial part of the samples for several of Statistics Norway’s (SSB) social surveys. Until recently, this subgroup had received little attention from survey designers at SSB.
Over the last few years, several social surveys have been substantially redesigned: moving from CATI to CAWI, from single mode to mixed mode, and from omnibus surveys to separate surveys, alongside the development of new web apps for survey data collection.
Previous strategies for treating child respondents in these surveys left us with little control over their response process. It was not specified whether, or when, children should respond to questions independently, with the help of an adult, or whether an adult should respond on the child’s behalf.
After the redesign of our social surveys, child respondents are now instructed to answer questions themselves; only when necessary are they prompted to ask an adult for guidance. Our questions therefore need to be understandable enough for a 9-year-old to answer independently, which has broad implications for the whole questionnaire design. Do we design separate questionnaires for children? Or will the adaptations made for children reduce response burden and improve overall data quality for all age groups?
To analyze response behavior and data quality, a targeted final question has been added to measure the extent to which children responded independently, responded with the help of an adult, or had an adult answer for them.
In this paper, we investigate how our new strategy for approaching child respondents affects a) survey communication, b) questionnaire design, and c) user testing. We also analyze how the new strategy may affect d) data quality and e) representativity. Our analyses contribute to a shared understanding of how to design social surveys so that children can answer independently, helping to ensure both good data quality and representativity.
Ms Karolina Predotova (GESIS - Leibniz Institute for the Social Sciences) - Presenting Author
Mr Thomas Skora (GESIS - Leibniz Institute for the Social Sciences)
Mr Tobias Gummer (GESIS - Leibniz Institute for the Social Sciences)
Mr Elias Naumann (GESIS - Leibniz Institute for the Social Sciences)
Societal crises are accompanied by an increased need for information on changing living conditions in the population that is both readily available and reliable. During the Covid-19 pandemic, this need gave rise to many data collection projects. Many of these projects had to deviate from established survey designs to comply with contact restrictions and deliver results quickly. Examples of these methodological changes include the increased use of non-probability samples and the shift from interviewer-administered to self-administered modes. Although analyses of the influence of design features on data quality exist, a systematic overview of the survey landscape and an assessment of the quality of these short-term data collections is still lacking. Focusing on Germany, we aim to address this research gap.
We collected a unique dataset (N ~ 600) of social science surveys conducted in Germany between March 2020 and December 2021. For each survey, we coded information on survey design features such as target population, sampling procedure, sample size, use of incentives, and outcome rates. Drawing on this dataset, we start by providing a rich description of surveying in Germany during the Covid-19 pandemic. Subsequently, we evaluate the data quality of the German survey landscape in the observed period. For this purpose, we rely on quality assessments for the most important design decisions, derived from the current literature.
Altogether, our study identifies recent methodological trends and discusses their relationship with data quality. The findings are relevant for survey practitioners and methodologists as they provide practical insights for future survey projects.
Ms Sophie Cassel (The SOM Institute, University of Gothenburg)
Ms Maria Andreasson (The SOM Institute, University of Gothenburg)
Ms Alexandra Garcia Nilsson (The SOM Institute, University of Gothenburg) - Presenting Author
Self-selected samples are generally cheaper and easier to recruit and maintain than probability-recruited samples, but the generalizability of results from self-selected samples has been questioned. In this study, the generalizability of experimental treatment effects across four different sampling strategies was assessed by administering three well-known psychological experiments to probability-based and non-probability-based samples of the Swedish Citizen Panel (SCP). The replicated experiments were taken from the ManyLabs project, in which researchers replicated several psychological experiments across different contexts. Results from the experiments in the SCP were compared both in samples prestratified on sex, age, and education and in samples that were not prestratified. Cohen’s d was used to compare effect sizes between the SCP, the original findings, and the ManyLabs replications. Across all samples (i.e., the probability- and non-probability-based samples), none of the estimated effect sizes differed. Nor did the estimated effect sizes differ depending on whether the samples were prestratified. In line with other similar replication studies, the effects estimated in the SCP were smaller than the originally published results. These results indicate that self-selected samples in the SCP produce results similar to those from probability-recruited samples and can validly be used for psychological experiments.
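For reference, the abstract does not state which variant of Cohen’s d was computed; the conventional pooled-standard-deviation definition for two groups (an assumption here, not a statement of the authors’ exact procedure) is

\[
d = \frac{\bar{x}_1 - \bar{x}_2}{s_p},
\qquad
s_p = \sqrt{\frac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{n_1 + n_2 - 2}},
\]

where \(\bar{x}_1, \bar{x}_2\) are the group means (e.g. treatment and control within each experiment), \(s_1^2, s_2^2\) the group variances, and \(n_1, n_2\) the group sizes. Comparing d across samples rather than raw mean differences puts effects on a common, scale-free footing.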
Dr Piotr Jabkowski (Faculty of Sociology; Adam Mickiewicz University, Poznan)
Professor Ulrich Kohler (Faculty of Economics and Social Sciences, University of Potsdam) - Presenting Author
Dr Marta Kołczyńska (Institute of Political Studies of the Polish Academy of Sciences, Warsaw)
There is a long-standing consensus among survey methodologists that response rates for surveys relying on probability samples of general populations have decreased over time, regardless of sampling design, survey mode, survey topic, or country. For probability samples, these decreasing response rates lead either to a loss of precision due to smaller sample sizes or to a rise in survey costs due to the need to increase the gross sample. The increasing survey costs and the continuing decline in response rates have led pollsters and other survey practitioners to opt out of probability sampling in favour of non-probability sampling with appropriate weighting. According to this view, we should start with non-probability sampling and invest the cost savings in developing statistical countermeasures against selection bias.
This presentation asks whether the decrease in response rates over time has caused a decline in sample quality. We address this question by analyzing methodological data describing 776 surveys conducted between 1999 and 2020 in four cross-national survey projects: the European Quality of Life Survey, the European Social Survey, the European Values Study, and the International Social Survey Programme. Based on a theoretical model of the factors that shape unit nonresponse and unit nonresponse bias, we estimate the causal effects of historical time on both nonresponse and nonresponse bias, as well as the contribution of nonresponse to nonresponse bias. Our analyses show that the decline in response rates was not accompanied by a corresponding increase in nonresponse bias, which is a promising result for all users of social surveys.
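For context, a standard deterministic expression for unit nonresponse bias in the respondent mean (a common reference point in this literature; the authors’ theoretical model may differ) is

\[
\operatorname{Bias}(\bar{y}_r) = \bar{y}_r - \bar{y} = \frac{n_{nr}}{n}\,\bigl(\bar{y}_r - \bar{y}_{nr}\bigr),
\]

where \(\bar{y}_r\) and \(\bar{y}_{nr}\) are the means among respondents and nonrespondents, \(\bar{y}\) is the full-sample mean, and \(n_{nr}/n\) is the nonresponse rate. The finding that response rates can fall without a corresponding rise in bias is consistent with this expression whenever respondents and nonrespondents remain similar on the variables of interest.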