
ESRA 2025 Preliminary Program

All time references are in CEST

Quality Assurance in Survey Data: Frameworks, Tools, and Quality Indicators

Session Organisers: Dr Jessica Daikeler (GESIS - Leibniz Institute for the Social Sciences)
Mrs Fabienne Krämer (GESIS - Leibniz Institute for the Social Sciences)
Time: Tuesday 15 July, 11:00 - 12:00
Room: Ruppert 119

Survey data, collected through various modes such as online surveys, face-to-face interviews, and telephone surveys, is subject to a wide range of potential errors that can compromise data integrity. Addressing these challenges requires robust frameworks, advanced tools, and reliable quality indicators to manage, validate, and enhance survey data quality.
This session will focus on key aspects of quality assurance in survey data collection and analysis, with a particular emphasis on the development and application of data quality indicators. We invite contributions that propose and showcase quality indicators designed to maintain the integrity and usability of survey data. Key topics will include:
1. Frameworks for Quality Assurance: An overview of frameworks developed to assess the quality of survey data.
2. Tools and Platforms for Data Validation: A discussion of tools and technologies for validating or improving the quality of survey data, as well as platforms that combine such tools, such as the KODAQS toolbox.
3. Data Quality Indicators: We seek contributions that demonstrate the effective use of quality indicators, such as response bias indicators or data consistency checks, in real-world case studies, showcasing how they address and enhance data quality (a minimal illustration of such a check follows this list).
4. Didactics of Data Quality Issues: Approaches to teaching and promoting data quality assurance for survey data. This section will explore educational strategies to equip researchers and practitioners with the necessary skills to effectively tackle data quality issues.
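
To make topic 3 concrete, the short Python sketch below shows what a simple data consistency check could look like. The variable names (work_experience_years, age, internet_use, social_media_use) and the two rules are invented for this illustration and do not refer to any particular survey or to the KODAQS toolbox.

```python
import pandas as pd

def consistency_flags(df: pd.DataFrame) -> pd.DataFrame:
    """Flag two example inconsistencies per respondent (hypothetical rules)."""
    flags = pd.DataFrame(index=df.index)
    # Rule 1: reported years of work experience should not exceed reported age.
    flags["experience_exceeds_age"] = df["work_experience_years"] > df["age"]
    # Rule 2: a respondent who reports never using the internet should not
    # also report daily social media use.
    flags["internet_contradiction"] = (
        (df["internet_use"] == "never") & (df["social_media_use"] == "daily")
    )
    return flags

def consistency_indicator(df: pd.DataFrame) -> float:
    """Share of respondents with at least one flagged inconsistency."""
    return consistency_flags(df).any(axis=1).mean()
```

The share of flagged respondents can then be reported as a single dataset-level consistency indicator, analogous to the indicators this session invites.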

Keywords: survey quality tools, data quality, frameworks, quality indicators, training, didactics

Papers

How Long Does It Take to Complete a Web Survey: Calculating the Actual Response Speed as a Quality Assurance Indicator

Mr Luka Štrlekar (University of Ljubljana, Faculty of Social Sciences) - Presenting Author
Dr Vasja Vehovar (University of Ljubljana, Faculty of Social Sciences)

Web surveys can capture digital traces, known as paradata, which record respondents' activities while completing the questionnaire and provide insights into respondents' behavior. In practice, the most widely used type of paradata is response times (RTs) – the time required to complete a question, page, or entire survey. Respondents with short RTs are especially often studied in relation to response quality, and survey duration is considered an important quality assurance indicator.

However, to accurately evaluate the relation between RTs and response quality, RTs must be properly analyzed. Although web survey tools typically allow for technically accurate measurement of RTs (at the page level), the main dilemma is whether these measurements can be used in further research without reservation. This issue is inadequately addressed in the literature, which usually assumes simple surveys and engaged respondents.

Specifically, RTs and the related response speeds should reflect only respondents' cognitive processes and the characteristics of the questionnaire, excluding confounding factors that compromise comparability: (1) pauses and multitasking behavior as well as (2) backtracking mean that recorded RTs are misestimated; (3) answering open-ended questions artificially creates the appearance of a slower response speed; while (4) not answering questions and (5) not being exposed to questions due to branching create the appearance of a faster response speed.

By removing the confounding effects of these factors (e.g., by subtracting the pausing duration), we develop the concept of “actual response speed”, which refers to the response speed when respondents are engaged in the response process as their primary cognitive activity and are exposed to standardized cognitive tasks (questions). This ensures comparable survey conditions for all respondents and enables the proper use of adjusted RTs in further analyses, particularly for the calculation of the actual (i.e., “true”) response time, which serves as an important quality assurance indicator. We also develop standardized solutions in R for treating these confounding factors.
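
The abstract describes the authors' standardized solutions in R; purely as an illustration of the underlying idea, the Python sketch below computes an adjusted, "actual" response speed from page-level paradata. All field names, and the decision to simply exclude revisited and open-ended pages rather than model them, are assumptions made for this sketch, not the authors' method.

```python
def actual_response_speed(pages):
    """Compute an adjusted response speed (seconds per answered question).

    `pages` is a list of dicts with assumed page-level paradata fields:
      rt          - recorded response time for the page, in seconds
      pause_time  - detected pausing / multitasking time on the page
      revisited   - True if the page was reached by backtracking
      open_ended  - True if the page contains an open-ended question
      n_answered  - number of questions actually answered on the page
    """
    total_time = 0.0
    total_answered = 0
    for page in pages:
        # (2) backtracking and (3) open-ended answering distort recorded RTs,
        # so such pages are excluded from the speed calculation here.
        if page["revisited"] or page["open_ended"]:
            continue
        # (4)/(5) pages with no answered (or no displayed) questions would make
        # the respondent look faster than they are, so they contribute nothing.
        if page["n_answered"] == 0:
            continue
        # (1) subtract pausing / multitasking time from the recorded RT.
        total_time += max(page["rt"] - page["pause_time"], 0.0)
        total_answered += page["n_answered"]
    return total_time / total_answered if total_answered else None
```

Summing the adjusted page times instead of dividing by the number of answered questions would give the corresponding "actual" questionnaire-level response time.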


Evaluating data quality in a mixed-mode establishment survey - Results of an experiment

Mrs Corinna König (Institute for Employment Research) - Presenting Author
Professor Joe Sakshaug (Institute for Employment Research)

Due to declining response rates and higher survey costs, establishment surveys are (or have been) transitioning from traditional interviewer-administered modes to online and mixed-mode data collection. The IAB Establishment Panel of the Institute for Employment Research (IAB), which was primarily a face-to-face survey, also experimented with an online starting mode followed by face-to-face as part of a sequential mixed-mode design, while the control group retained the traditional face-to-face design. Previous analyses have shown that the mixed-mode design maintains response rates at lower costs compared to the face-to-face design, but the question remains to what extent introducing the web mode affects data quality.

We address this research question through several analyses: first, by comparing 20 survey responses from the single- and mixed-mode experimental groups to corresponding administrative data from employer-level social security notifications, which allows us to assess the accuracy of survey responses in both mode designs and to evaluate measurement equivalence; second, by comparing answers to questions prone to social desirability bias between the online and face-to-face modes; third, by reporting on differences in triggering follow-up questions when answering filter questions; and fourth, by examining item nonresponse in the different mode groups. To account for selection and nonresponse bias, selection weights are used throughout the analysis.

First results show that measurement error bias in online interviews is sometimes larger than in face-to-face interviews, but when the mixed-mode design as a whole is compared to the face-to-face design, these differences are no longer significant. For sensitive questions on social desirability, respondents generally do not answer less socially desirably in the online mode, with only a few exceptions. The study thus provides comprehensive insights into data quality for mixed-mode data collection in establishment surveys.
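
As one hedged illustration of the fourth comparison (item nonresponse across mode groups, taking selection weights into account), the Python sketch below computes weighted item nonresponse rates per mode. The column names (mode, sel_weight, and the item columns) are assumptions for this sketch and do not reflect the IAB data structure or the authors' actual analysis.

```python
import numpy as np
import pandas as pd

def weighted_item_nonresponse(df, items, mode_col="mode", weight_col="sel_weight"):
    """Weighted share of missing answers per item within each mode group.

    `items` lists the survey item columns; `mode_col` and `weight_col` are
    assumed column names for the mode group and the selection weight.
    """
    rows = []
    for mode, group in df.groupby(mode_col):
        weights = group[weight_col].to_numpy()
        for item in items:
            missing = group[item].isna().to_numpy(dtype=float)
            rows.append({
                "mode": mode,
                "item": item,
                "nonresponse_rate": np.average(missing, weights=weights),
            })
    return pd.DataFrame(rows)
```

Comparing these rates across the face-to-face, online, and mixed-mode groups is one simple way to express the kind of data quality difference the study examines.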


Increasing respondents' survey literacy: A knowledge intervention to detect misleading survey results

Professor Sven Stadtmüller (University of Applied Sciences Göttingen) - Presenting Author
Professor Henning Silber (University of Michigan)

More and more surveys are being conducted, and surveys are regularly used in political and economic decision making. Likewise, members of the public use survey results reported in the mass media to inform themselves about public opinion on political and economic issues. However, the increase in surveys has been accompanied by a decline in survey quality and quality control. Even high-end news outlets often fail to distinguish between surveys that are conducted according to scientific standards and those that are not.
Recent research from Germany and the United States suggests that people's knowledge about survey quality is sparse. At best, members of the general public use heuristics regarding the sample size (the more, the better) and the sample composition (representativity equals high quality). However, important quality indicators such as the sampling method (i.e., random sampling) and the response rate are rarely used. Additionally, little is known about how the general public evaluates survey quality and the trustworthiness of survey results when different pieces of quality information suggest different levels of survey quality. For example, if an individual is presented with a result from a survey that relies on a convenience sample (low quality) but has an impressive sample size (high quality), the recipient might either rely on the sample size heuristic or instead base their level of trust on the low-quality sampling method.
Against this background, this research uses an experimental design fielded in the German probability-based GESIS Panel to test a knowledge intervention. Specifically, the intervention briefly explains the relative importance of the sampling method and the sample size for the quality of a survey. Afterwards, we test (1) whether the intervention helped respondents to adequately discriminate “good” from “bad” surveys and (2) which individual-level factors (e.g., survey attitudes) moderate the effect of the knowledge intervention.