ESRA 2025 Preliminary Program

All time references are in CEST

Item Nonresponse and Unit Nonresponse in Panel Studies 2

Session Organisers: Dr Uta Landrock (LIfBi – Leibniz Institute for Educational Trajectories)
Dr Ariane Würbach (LIfBi – Leibniz Institute for Educational Trajectories)
Mr Michael Bergrab (LIfBi – Leibniz Institute for Educational Trajectories)
Time: Wednesday 16 July, 16:00 - 17:30
Room: Ruppert rood - 0.51

Panel studies face various challenges, starting with establishing a panel, ensuring panel stability, minimizing sample selectivity and, overall, achieving high data quality. All of these efforts are compromised by nonresponse. Unit nonresponse may lead to small sample sizes, particularly if it occurs in the initial wave. It may also lead to panel attrition: besides active withdrawals, respondents might drop out for administrative reasons, for example, if (recurrent) non-respondents are excluded from the sample. Item nonresponse reduces data quality, since excluding respondents with missing information from analyses decreases the statistical power of analyses based on the variables in question. In extreme cases, variables may have to be excluded from analyses altogether because of their high proportion of missing values. Both unit nonresponse and item nonresponse may introduce bias, either by increasing sample selectivity or by distorting the distribution of certain variables.
New and alternative data sources may also shed new light on the issue of nonresponse, as it is not yet entirely clear how these developments will affect longitudinal data collection.
We invite researchers to participate in this discussion, which may – among many others – include the following topics:
- Quantifying item and unit nonresponse, including resulting selectivity,
- Measuring the development of item and unit nonresponse across panel waves,
- Implications of item and unit nonresponse on data quality,
- Strategies for reducing item and unit nonresponse, e.g., by developing new question or response formats, introducing tailored incentive schemes, or offering different modes,
- Problems related to such measures, e.g., comparability across panel waves,
- Handling item and unit nonresponse, for example, by imputing missing values or weighting (see the sketch after this list),
- Using contact and paradata to avoid item and unit nonresponse by monitoring fieldwork during data collection.
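As an illustration of the imputation and weighting point above, here is a minimal Python sketch on synthetic data; all variable names (income, responded, age, female) and numbers are hypothetical illustrations, not taken from any of the studies in this session:

import numpy as np
import pandas as pd
import statsmodels.api as sm

# Synthetic panel-wave data; everything here is invented for illustration.
rng = np.random.default_rng(1)
n = 2_000
df = pd.DataFrame({
    "age": rng.normal(50, 15, n),
    "female": rng.integers(0, 2, n),
})
# Unit nonresponse depends on age here (purely illustrative).
df["responded"] = (rng.random(n) < 1 / (1 + np.exp(-(df["age"] - 40) / 20))).astype(int)
# Item nonresponse: income is observed only for a random subset.
df["income"] = np.where(rng.random(n) < 0.8, rng.normal(2500, 800, n), np.nan)

# Item nonresponse: simple mean imputation for a single variable.
df["income_imputed"] = df["income"].fillna(df["income"].mean())

# Unit nonresponse: inverse-probability weighting based on a
# response-propensity model using covariates known for everyone.
X = sm.add_constant(df[["age", "female"]])
propensity = sm.Logit(df["responded"], X).fit(disp=False)
df["p_respond"] = propensity.predict(X)
respondents = df[df["responded"] == 1].copy()
respondents["weight"] = 1.0 / respondents["p_respond"]  # up-weight underrepresented cases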

Keywords: item nonresponse, unit nonresponse, panel data

Papers

Do talk money – Reducing income nonresponse in surveys

Ms Katharina Allinger (Oesterreichische Nationalbank) - Presenting Author
Ms Melanie Koch (Oesterreichische Nationalbank)

Item nonresponse can be a pervasive issue in surveys, and questions about monetary values, such as income, are especially prone to it. We implement an experiment to reduce nonresponse to income questions in an international household survey, focusing on four countries where income nonresponse is very common.

In the experiment, survey respondents are always asked to report their exact household income first. Those who refuse to answer are then randomized into two groups. In a follow-up question, the control group is asked to choose their income from a very granular list of at least 20 brackets, while the treatment group is simply asked whether their income falls into the first, second, or third pre-defined income tercile. With this treatment, we test to what extent nonresponse to income questions can be reduced by lowering the number of brackets.

We expect nonresponse to exact income amounts to have two main causes: either the person does not know the exact amount, or the person is unwilling to share it because of privacy concerns. In both cases, fewer brackets should be a remedy.
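A minimal simulation sketch of this two-stage design in Python; all refusal and response probabilities below are invented for illustration and are not the authors' data:

import numpy as np

rng = np.random.default_rng(42)
n = 10_000

# Stage 1: some respondents refuse the exact-income question (invented rate).
refused_exact = rng.random(n) < 0.30

# Stage 2: randomize into control (20+ brackets) vs. treatment (terciles).
treated = rng.random(n) < 0.5

# Assumed follow-up response probabilities (hypothetical numbers):
# fewer brackets are assumed to make the follow-up easier to answer.
followup_answers = np.where(treated,
                            rng.random(n) < 0.80,   # terciles
                            rng.random(n) < 0.60)   # granular brackets

# Remaining nonresponse after the follow-up, among initial refusers, by arm.
nr_control = 1 - followup_answers[refused_exact & ~treated].mean()
nr_treated = 1 - followup_answers[refused_exact & treated].mean()
print(f"Follow-up nonresponse: control {nr_control:.1%}, treatment {nr_treated:.1%}, "
      f"reduction {(nr_control - nr_treated) * 100:.1f} pp")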

Indeed, in all four countries, the treatment leads to a significant decrease in nonresponse of between 11 and almost 28 percentage points. Moreover, the treatment seems to be especially effective for those who were unwilling to report the exact amount, as opposed to those who were unable to. We do not find large heterogeneous effects across population subgroups: nonresponse is reduced in all gender, age, and educational groups. There is no positive spillover effect on the willingness to answer subsequent questions on exact amounts of personal income.

Thus, when condensed income data are sufficient, fewer answer options are a cost-effective way to reduce nonresponse.


Backing up a Panel with Piggybacking – Does Recruitment with Piggybacking Lead to More Nonresponse Bias?

Mr Bjoern Rohr (GESIS Leibniz Institute for the Social Sciences) - Presenting Author

Sampling and recruiting respondents for (online) probability-based panels can be very expensive. One cost-intensive aspect of the process is drawing a separate sample and recruiting the respondents offline. To reduce this cost, some mixed-mode or online panels (e.g., the GESIS Panel, the German Internet Panel, and the NatCen Panel) have relied on piggybacking for some recruitments or refreshments. Piggybacking means that participants for the panel are recruited at the end of another probability survey, so that no additional sample has to be drawn. While this may reduce costs, it may also introduce additional nonresponse. Whether this additional nonresponse translates into more bias in practical applications of a piggybacking survey is the question this research addresses.

To answer it, we use the GESIS Panel, a panel survey that was initially recruited in 2013 (n = 4,961) from a separate sample and later refreshed three times with the help of piggybacking (n = 1,710, 1,607, 764). This setting allows us to compare the bias of both recruitment types against each other and, with the help of German Microcensus benchmarks, to disentangle the nonresponse bias introduced by piggybacking. Bias is measured as relative bias for demographic and job-related variables, as well as the difference in Pearson's r between benchmark and survey.

Initial results indicate that univariate estimates from piggybacking recruitments are more often biased directly after recruitment from the parent survey than estimates from a separate recruitment. However, these differences shrink over subsequent panel waves, indicating that piggybacking recruitments are less affected by panel attrition. Regarding Pearson's r estimates, our analyses show mixed results.
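A minimal sketch of the two bias measures described above, using invented numbers and synthetic data rather than GESIS Panel or Microcensus values:

import numpy as np

# Univariate measure: relative bias of a survey estimate against a benchmark.
survey_share_employed = 0.52       # hypothetical survey estimate
benchmark_share_employed = 0.57    # hypothetical Microcensus value
rel_bias = (survey_share_employed - benchmark_share_employed) / benchmark_share_employed
print(f"Relative bias: {rel_bias:+.1%}")

# Bivariate measure: difference in Pearson's r between survey and benchmark.
rng = np.random.default_rng(0)
age = rng.normal(45, 12, 1_000)                        # synthetic survey microdata
income = 1_000 + 20 * age + rng.normal(0, 300, 1_000)
r_survey = np.corrcoef(age, income)[0, 1]
r_benchmark = 0.65                                     # hypothetical benchmark value
print(f"Difference in Pearson's r: {r_survey - r_benchmark:+.3f}")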


Impact of Increased Survey Frequency on the Participation of Older Respondents in Longitudinal Surveys

Dr Michael Bergmann (SHARE Berlin Institute (SBI)) - Presenting Author
Mrs Magdalena Quezada (SHARE Berlin Institute (SBI))

To increase flexibility and to be able to respond quickly to new developments, the Survey of Health, Ageing and Retirement in Europe (SHARE) envisages supplementary surveys with additional thematic modules in the year between the biennial core survey waves. This approach will accommodate new contributions to questionnaire content, ensuring that SHARE remains interesting and relevant to researchers across Europe and beyond. At the same time, there is a concern that these additional in-between surveys will increase respondent burden and thus reduce respondents' willingness to participate in future panel waves. Previous studies are inconclusive in this regard, particularly in relation to older people, who are more likely to have health problems and may therefore be more sensitive to the increased burden of more frequent survey invitations.
To address the question of whether increased survey frequency has a negative impact on the future participation of older respondents in longitudinal surveys, we analyze SHARE data from Waves 8 and 9. Between these face-to-face waves, SHARE conducted two SHARE Corona Surveys by telephone, which could not be fully implemented in all countries. Because a regionally stratified random sub-sample was drawn in these countries to select participants for the SHARE Corona Surveys, this provides a quasi-experimental setting in which the Wave 9 participation rates of randomly selected respondents who took part in the in-between surveys can be properly compared with those of respondents who did not.
Preliminary results show that more surveys per se do not lead to higher attrition rates in an ongoing panel study. Rather, additional surveys on topics that interest respondents do not appear to be perceived as overly burdensome. The results of our study extend beyond SHARE and can inform the design of panel surveys (of older people) in general.
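A minimal sketch of such a quasi-experimental comparison, as a two-proportion z-test on invented participation counts (not SHARE results):

from statsmodels.stats.proportion import proportions_ztest

# Wave 9 participants among those selected vs. not selected for the
# in-between Corona Surveys; all counts are hypothetical placeholders.
participated = [1450, 1380]   # [selected, not selected]
group_sizes = [1800, 1800]    # sub-sample sizes in both groups

stat, pvalue = proportions_ztest(participated, group_sizes)
rates = [p / n for p, n in zip(participated, group_sizes)]
print(f"Participation: selected {rates[0]:.1%} vs. not selected {rates[1]:.1%}, "
      f"z = {stat:.2f}, p = {pvalue:.3f}")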