ESRA 2025 Preliminary Glance Program
All time references are in CEST
Dealing with biases and memory effects
Session Organiser: Dr Daniel Seddig (KFN)
Time: Friday 18 July, 09:00 - 10:30
Room: Ruppert 119
Keywords: Bias, split design, memory effects
Papers
Memory Effects in Word Recall Tests. Insights From a Randomized Split Design.
Mr Alexander Schumacher (SHARE BERLIN Institute) - Presenting Author
Panel effects pose a dilemma for survey methodology: for all the analytic potential repeated measurements offer, the repetition itself can bias results. This is especially true for measures that give respondents an opportunity to improve by learning, such as memory tasks. The Survey of Health, Ageing and Retirement in Europe (SHARE) offers a unique opportunity to study such panel effects in memory tests. The study has administered a word recall test to respondents in a split design, repeating the measurement roughly every two years and randomly assigning one of four word lists for respondents to recall.
This randomization enables us to compare respondents who received the same list in two consecutive interviews with respondents who received differing lists, effectively yielding a randomized experiment. This comparison can illuminate how repeated measurements affect memory test results even across several years. The project aims to identify such potential panel effects and to inform survey professionals and researchers on how to take them into account in their research.
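The comparison at the core of this design can be illustrated with a minimal sketch (not taken from the paper; the data layout, column names, and values below are hypothetical): respondents who received the same word list in two consecutive waves are contrasted with respondents who received differing lists.

```python
import pandas as pd
from scipy import stats

# Hypothetical long-format panel data: one row per respondent and wave,
# with the randomly assigned word list and the recall score.
df = pd.DataFrame({
    "respondent_id": [1, 1, 2, 2, 3, 3, 4, 4],
    "wave":          [1, 2, 1, 2, 1, 2, 1, 2],
    "word_list":     ["A", "A", "B", "C", "D", "D", "A", "B"],
    "recall_score":  [5, 7, 6, 6, 4, 5, 5, 6],
})

# Reshape so each respondent's two waves sit side by side.
wide = df.pivot(index="respondent_id", columns="wave",
                values=["word_list", "recall_score"])

# Flag respondents who happened to receive the same list in both waves.
same_list = wide[("word_list", 1)] == wide[("word_list", 2)]
gain = wide[("recall_score", 2)] - wide[("recall_score", 1)]

# Compare the wave-to-wave change in recall between the two groups; a larger
# gain in the same-list group would point to a memory/panel effect.
t, p = stats.ttest_ind(gain[same_list], gain[~same_list])
print(f"mean gain, same list:      {gain[same_list].mean():.2f}")
print(f"mean gain, different list: {gain[~same_list].mean():.2f}")
print(f"t = {t:.2f}, p = {p:.3f}")
```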
Incentives, attentiveness and education bias: an analysis of an online survey in Spain
Dr Mónica Méndez-Lago (CIS-Centro de Investigaciones Sociológicas) - Presenting Author
Dr Álvaro Suárez Vergne (CSIC-Consejo Superior de Investigaciones Científicas)
Dr Héctor Cebolla-Boado (CSIC-Consejo Superior de Investigaciones Científicas)
The presentation deals with response bias through an analysis of the impact of providing an incentive on the response behaviour of different types of respondents, depending on their educational attainment and socio-economic resources.
We use data from an online survey carried out in Spain as part of the preparatory work for a follow-up audit experiment set up to examine potential discrimination in the access of ethnic minorities from a variety of countries (Morocco, Turkey, Pakistan, Senegal, Congo and Nigeria) to different markets (childcare, housing and the labour market). The aim of the preparatory survey was to evaluate different combinations of names and geographical origins, selected mostly using information from the Spanish population register, to make sure the names that would be finally included in the audit experiment were associated by the general population with the intended country/geographical area.
The survey design included an experimental condition regarding respondents’ attentiveness, a topic that is generating concern especially in self-completion surveys. Half of the respondents (randomly selected) were informed that those who “guessed” a higher number of matches between names and geographical/country origins would enter a lottery for a €100 voucher.
We find that individuals who were offered this incentive devoted more time to the survey, and we examine this effect across groups defined by educational attainment and socio-economic status. Regarding the responses obtained, we found that offering the incentive increased the overall percentage of “correct” associations between names and geographical origins, but its impact was greater among the higher educated, which was not the intended aim of the incentive. We discuss these findings and reflect on their general implications for survey design, and on whether this type of incentive helps to tackle education bias or can even make it worse.
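One way such a differential incentive effect could be examined is with an interaction model between the incentive and educational attainment. The sketch below is not the authors' analysis; the variable names and simulated data are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated respondent-level data: whether the lottery incentive was offered,
# a higher-education indicator, and the share of name-origin matches each
# respondent got "correct". All values are hypothetical.
rng = np.random.default_rng(1)
n = 500
incentive = rng.integers(0, 2, n)
higher_ed = rng.integers(0, 2, n)
correct_share = (0.55 + 0.04 * incentive + 0.05 * higher_ed
                 + 0.06 * incentive * higher_ed   # larger effect among higher educated
                 + rng.normal(0, 0.10, n))
data = pd.DataFrame({"incentive": incentive, "higher_ed": higher_ed,
                     "correct_share": correct_share})

# The incentive:higher_ed interaction captures whether the incentive widens
# the education gap in "correct" associations.
model = smf.ols("correct_share ~ incentive * higher_ed", data=data).fit()
print(model.summary().tables[1])
```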
As response rates decline, do cell-weights improve the accuracy of a time-series cross-section study? An assessment of weighting strategies as a means to decrease four decades of potentially increasing nonresponse bias
Ms Cornelia Andersson (The SOM Institute) - Presenting Author
Ms Freja Wessman (The SOM Institute)
Dr Sebastian Lundmark (The SOM Institute)
One of many strategies used for combatting nonresponse bias is post-survey adjustment, also known as weighting. The SOM Institute has conducted cross-sectional probability-based surveys in Sweden since 1986. At its inception, the SOM Institute’s surveys regularly enjoyed response rates in the 70-80% range. But since the early 2000s, response rates have declined rapidly, and, despite many added interventions to stimulate response, the Institute rarely achieves response rates above 50%. Making matters worse, some groups of people, such as the less educated, have stopped responding at a faster pace than others, adding apparently increasing nonresponse bias to any time-series comparisons (Lundmark & Backström, 2023). To explore whether the declining response rates and the increasing demographic nonresponse bias have led to biased time-series point estimate comparisons, a multitude of cell-weighting schemes, making use of Sweden’s easily obtainable population register information, were assessed. These post-survey adjustments were assessed in terms of reduced Root Mean Squared Errors (RMSEs) on driving license data, voting behavior, and voter turnout for all election years in Sweden between 1991 and 2022. The weights that yielded the greatest decrease in RMSE were then assessed in terms of their impact on point estimates for many time-series outcomes, such as confidence in politicians, newspaper reading, political interest, trust in universities, and different self-reported behaviors. The results revealed that, despite the cell weights decreasing the RMSEs (especially since the onset of the rapidly declining response rates after 2000), almost none of the point estimates were substantially affected by these weighting strategies. That point estimates on a large variety of outcomes remained unaffected by the weights suggests that the increasing demographic nonresponse bias may be more random than some would expect.
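As an illustration of the general approach (a minimal sketch under assumptions, not the SOM Institute's actual weighting scheme; the cells, shares, and benchmark value are hypothetical), cell weights can be derived by dividing register-based population cell shares by the corresponding sample cell shares, and the weighted estimates can then be compared against official benchmarks.

```python
import numpy as np
import pandas as pd

# Hypothetical sample with demographic cells (sex x age group) and a benchmark
# outcome (self-reported voting); cell shares and the benchmark are made up.
sample = pd.DataFrame({
    "sex":   ["F", "F", "M", "M", "F", "M", "F", "M"],
    "age":   ["18-39", "40+", "18-39", "40+", "40+", "40+", "18-39", "18-39"],
    "voted": [1, 1, 0, 1, 1, 1, 1, 0],
})

# Population shares per cell, e.g. taken from a population register.
population_share = {
    ("F", "18-39"): 0.24, ("F", "40+"): 0.27,
    ("M", "18-39"): 0.23, ("M", "40+"): 0.26,
}

# Cell weight = population share of the cell / sample share of the cell.
sample_share = sample.groupby(["sex", "age"]).size() / len(sample)
sample["weight"] = [population_share[(s, a)] / sample_share[(s, a)]
                    for s, a in zip(sample["sex"], sample["age"])]

# Weighted estimate compared against an official benchmark (e.g. actual turnout).
weighted_turnout = np.average(sample["voted"], weights=sample["weight"])
benchmark = 0.84
error = abs(weighted_turnout - benchmark)  # with one benchmark, RMSE is the absolute error
print(f"weighted turnout estimate: {weighted_turnout:.3f}")
print(f"error vs. benchmark:       {error:.3f}")
```

In the study itself, RMSEs are computed across several benchmark outcomes and election years rather than against a single benchmark.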
Using Missing Data Strategies to Correct for Biases in Political Self-Selection Questionnaires
Ms Saskia Bartholomäus (GESIS)
Dr Katharina Pfaff (University of Vienna) - Presenting Author
Professor Sylvia Kritzinger (University of Vienna)
Although political science strives to understand the reasons for and consequences of political participation, empirical research is limited by the overrepresentation of politically interested respondents in political science surveys. Recent research has shown that including non-political question modules for respondents with low political interest may reduce nonresponse bias in political science surveys. However, researchers are often not aware of respondents’ topic interests. Instead of assigning them to different modules based on presumed political interest, researchers could allow respondents to self-select the question module they wish to answer. Self-selection questionnaires have been shown to enhance panel participation and improve data quality. Yet this approach may introduce a potential bias, as individuals with lower political interest are less likely to select political modules than those with a stronger interest in politics. This self-selection could distort substantive findings. Missing data strategies that account for varying response propensities hold promise for correcting such biases. In this study, we examine whether political self-selection questionnaires bias substantive results and whether missing data strategies reduce the biased estimates they cause. To answer both questions, we rely on a survey experiment conducted in the probability-based Digitize! Online Panel Survey. Respondents, who all answered a module with questions on politics, were randomly assigned either to a treatment with fixed-order modules or to a treatment with the option to select different modules. Results provide insights into how to improve underrepresented groups’ response rates by using self-selection questionnaires without compromising data quality through selection bias.
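One form such a missing data correction could take (a hedged sketch, not the authors' method; the inverse-propensity weighting approach, variable names, and simulated data are assumptions) is to model the propensity to self-select the political module and reweight the self-selectors accordingly.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Simulated data: in the self-selection arm, respondents choose whether to
# answer the political module; political interest drives that choice.
# All variables and values are hypothetical.
rng = np.random.default_rng(7)
n = 1000
pol_interest = rng.integers(1, 6, n)                         # 1 = low, 5 = high
p_select = 1 / (1 + np.exp(-(pol_interest - 3)))             # more interest -> more selection
selected = rng.binomial(1, p_select)
turnout_intent = rng.binomial(1, 0.35 + 0.1 * pol_interest)  # a political outcome

# Step 1: model the propensity to self-select the political module.
X = pol_interest.reshape(-1, 1)
propensity = LogisticRegression().fit(X, selected).predict_proba(X)[:, 1]

# Step 2: reweight the self-selectors by the inverse of their selection
# propensity, so that low-interest respondents count more.
mask = selected == 1
naive = turnout_intent[mask].mean()
ipw = np.average(turnout_intent[mask], weights=1 / propensity[mask])
print(f"naive estimate among self-selectors:  {naive:.3f}")
print(f"inverse-propensity-weighted estimate: {ipw:.3f}")
print(f"full-sample benchmark:                {turnout_intent.mean():.3f}")
```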