All time references are in CEST
Item Nonresponse and Unit Nonresponse in Panel Studies 1
Session Organisers: Dr Uta Landrock (LIfBi – Leibniz Institute for Educational Trajectories), Dr Ariane Würbach (LIfBi – Leibniz Institute for Educational Trajectories), Mr Michael Bergrab (LIfBi – Leibniz Institute for Educational Trajectories)
Time: Wednesday 19 July, 09:00 – 10:30
Room: U6-01f
Panel studies face various challenges: establishing the panel, ensuring panel stability, minimizing sample selectivity, and achieving high data quality. All of these are compounded by nonresponse. Unit nonresponse may lead to small sample sizes (particularly if it occurs in the initial wave) as well as to panel attrition, for example when (recurrent) non-respondents are excluded from the sample for administrative reasons. Item nonresponse reduces data quality, since excluding respondents with missing information decreases the statistical power of analyses based on the affected variables. In extreme cases, variables may need to be excluded from analyses altogether because of their high proportion of missingness. Both unit nonresponse and item nonresponse may introduce bias, either by increasing sample selectivity or by distorting the distribution of particular variables.
A societal crisis can intensify these challenges in various ways. The COVID-19 pandemic, for instance, may foster nonresponse for two reasons: it increases stress in the lives of our target populations, and it limits the ability of panel studies to use interviewers for conducting personal interviews and for motivating respondents to participate.
We invite researchers to participate in this discussion, which may, among many others, include the following topics:
- Quantifying item nonresponse and unit nonresponse, including resulting selectivity.
- Measuring the development of item nonresponse and unit nonresponse over panel waves.
- Implications of item nonresponse and unit nonresponse on data quality.
- Strategies for reducing item nonresponse and unit nonresponse, e.g. by developing new question formats or response formats, introducing modified incentive schemes, offering different modes, or allowing mode or language switching.
- Problems related to such measures, e.g., comparability across panel waves.
- Handling item nonresponse and unit nonresponse, for example, by imputation of missing values or weighting.
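As a toy illustration of the last point, the sketch below mean-imputes a variable affected by item nonresponse and derives inverse-response-rate weights for unit nonresponse within adjustment groups. All function names and data are invented for illustration; real panel studies use far more sophisticated methods (e.g., multiple imputation, calibration weighting).

```python
def mean_impute(values):
    """Replace None (item nonresponse) with the mean of the observed values."""
    observed = [v for v in values if v is not None]
    mean = sum(observed) / len(observed)
    return [mean if v is None else v for v in values]

def nonresponse_weights(responded_by_group):
    """Weight each responding unit by the inverse of its group's response
    rate, so that respondents stand in for similar nonrespondents."""
    weights = {}
    for group, flags in responded_by_group.items():  # flags: 1 = responded
        rate = sum(flags) / len(flags)
        weights[group] = 1.0 / rate
    return weights
```

For example, `mean_impute([2.0, None, 4.0])` yields `[2.0, 3.0, 4.0]`, and a group with a 50% response rate receives weight 2.0 for its respondents.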
Keywords: panel studies, item nonresponse, unit nonresponse
Dr Nicole Watson (University of Melbourne) - Presenting Author
Attrition is an inevitable part of any longitudinal survey. Many sample members participate in every wave, while others drop out either permanently or temporarily. This paper examines the nature of attrition in the Household, Income and Labour Dynamics in Australia (HILDA) Survey, which now incorporates 21 waves of annual data collection. Latent class analysis is used to separate the wave 1 sample members into nine respondent classes, each with a different response profile. Moving out of scope (e.g., through death or moving abroad) is treated as a separate outcome from non-response, and allowance is made for cases with unknown eligibility (e.g., due to non-contact or no longer being issued to field). The largest class comprises loyal stayers who respond every wave or almost every wave. There are two groups of lurkers who respond the majority of the time but do not drop out. Two classes relate to people moving permanently out of scope (mainly through death). The remaining four classes relate to those permanently dropping out of the sample at various stages of the panel; together they make up 36% of the initial sample. The characteristics of the different classes are compared with those of the loyal class to help understand the biases that may result from these different forms of non-response.
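The latent class idea behind this analysis can be illustrated with a minimal sketch: a two-class Bernoulli mixture fitted by EM to binary wave-participation patterns. This is an assumption-laden toy, not the HILDA analysis, which uses nine classes and additionally models out-of-scope and unknown-eligibility outcomes.

```python
import random

def fit_two_class_lca(patterns, n_iter=200, seed=0):
    """Fit a two-class latent class model (Bernoulli mixture) to binary
    wave-participation patterns via EM.  Returns the class weights and
    the per-wave response probabilities for each class."""
    rng = random.Random(seed)
    T = len(patterns[0])
    pi = [0.5, 0.5]
    # Asymmetric random start so the two classes can separate.
    p = [[min(max(rng.random(), 0.05), 0.95) for _ in range(T)]
         for _ in range(2)]
    for _ in range(n_iter):
        # E-step: posterior class membership for each pattern.
        resp = []
        for x in patterns:
            lik = []
            for c in range(2):
                l = pi[c]
                for t in range(T):
                    l *= p[c][t] if x[t] else (1 - p[c][t])
                lik.append(l)
            s = sum(lik) or 1e-12
            resp.append([l / s for l in lik])
        # M-step: update class weights and wave-specific probabilities,
        # clipping to avoid degenerate 0/1 estimates.
        for c in range(2):
            w = sum(r[c] for r in resp)
            pi[c] = w / len(patterns)
            for t in range(T):
                num = sum(r[c] * x[t] for r, x in zip(resp, patterns))
                p[c][t] = min(max(num / (w or 1e-12), 0.01), 0.99)
    return pi, p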
Professor Susanne Vogl (University of Stuttgart) - Presenting Author
Mr Paul Malschinger (University of Vienna)
Professor Brigitte Schels (IAB)
Longitudinal transition studies are central to youth research but face particular challenges of panel attrition. While adolescents can be reached easily through institutions, after they leave school their lives change and it is more difficult to include them in further panel waves. This methodological paper explores factors related to different patterns of panel attrition. In addition to (a) general factors of survey participation associated with socio-demographic variables such as gender and social background, we also consider (b) aspects related to the survey topic that could influence patterns of attrition, such as career choice and future orientations, as well as (c) the survey experience, i.e. the evaluation of the questionnaire in the first wave. We use data from a five-wave panel study of young people in Vienna. About 3,000 respondents participated in the first survey wave in their last year at lower-track secondary school and were then surveyed annually over the next four years (2018–2022). We identify a group of respondents who remain continuously in the panel, some who drop out at some point, and others who temporarily do not participate and re-enter later. Using logistic regression models, we show that gender, migration background, parental occupational status, and school grades are related to patterns of attrition, partly because of differences in survey experience in the first wave. Our results can inform future studies regarding causes and consequences of panel attrition.
Mrs Manuela Schmidt (Bonn University) - Presenting Author
Mrs Alice Barth (Bonn University)
Started in 2010, the Cologne dwelling panel is designed to measure change in neighborhoods (Friedrichs & Blasius 2015; 2020). The fifth wave of data collection took place in 2022. Unlike a “classic” panel of persons or households, here the dwellings constitute the units of investigation, with one inhabitant of a dwelling acting as its “speaker”. When a household moves out between waves, a new inhabitant of the same dwelling is interviewed in the following wave. This means that new persons enter the panel in each wave and others leave it. While the dwellings as units of investigation stay the same, their facilities may change with a change of inhabitants.
Like other panels, a dwelling panel suffers from attrition. The dwellings themselves cannot discontinue their participation, but their inhabitants can, and do so for various reasons. Studies of person or household panels have shown that attrition is not random. While most determinants of not participating in the next wave vary considerably with study design, fieldwork, and societal context, a move of the panel household often significantly increases the probability of attrition (Behr, Bellgardt & Rendtel 2005; Frankel & Hillygus 2014). In a dwelling panel, however, moves of individuals are invariably part of the design and need not be problematic in terms of sample composition when in-movers’ willingness to participate in the study is high.
In this presentation, we aim to assess attrition in a dwelling panel. How can individual and dwelling-specific characteristics be disentangled to study attrition determinants? We use dwelling characteristics such as size and location as well as inhabitants’ status (moved or not) to predict the probability of attrition and discuss the impact of attrition on sample composition and representativeness, as well as methods of drawing refreshment samples in a dwelling panel.
References:
Behr, A., Bellgardt, E., & Rendtel, U. (2005). Extent and determinants of panel attrition in the European Community Household Panel. European Sociological Review, 21(5), 489-512.
Frankel, L. L., & Hillygus, D. S. (2014). Looking beyond demographics: Panel attrition in the ANES and GSS. Political Analysis, 22(3), 336-353.
Friedrichs, J., & Blasius, J. (2015). The dwelling panel – a new research method for studying urban change. Raumforschung und Raumordnung | Spatial Research and Planning, 73(6), 377-388.
Friedrichs, J., & Blasius, J. (2020). Neighborhood change–results from a dwelling panel. Housing Studies, 35(10), 1723-1741.
Dr Blazej Palat (Sciences Po - CDSP) - Presenting Author
Mrs Marion Elie (Sciences Po - CDSP)
Dr Guillaume Garcia (Sciences Po - CDSP)
Dr Selma Bendjaballah (Sciences Po - CDSP)
Professor Nicolas Sauger (Sciences Po - CDSP)
Building on the literature on classifying online panel participants by their responsiveness and on the effectiveness of retention strategies, we designed an experiment within ELIPSS, a French probability-based web panel. Two experimental groups were formed for a protocol running over eight consecutive survey fieldwork periods (one year). While email reminders were consistently sent to non-respondents in both groups, additional telephone call-backs were implemented for only one of them. The panellists were further differentiated into three classes according to their responsiveness to study invitations. We found that telephone call-backs were effective in increasing study response rates, but only among mildly disengaged panellists. This result is important for panel management optimisation and resource allocation in similar projects.
Dr Uta Landrock (LIfBi - Leibniz Institute for Educational Trajectories) - Presenting Author
Like probably all longitudinal studies, the German Educational Panel Study NEPS faces the problem of panel attrition. Now, after 14 survey waves, more than 50 percent of the initial respondents of the NEPS starting cohort of adults have dropped out. Analyzing the reasons for dropping out may help to understand and possibly prevent panel attrition. Our question is whether the reasons for non-participation change over the course of the panel. If this is the case, it would be appropriate and advisable to treat different survey waves differently, e.g., in terms of how respondents are contacted and approached.
To answer the question of whether the reasons for non-participation change, we examine final disposition codes before dropout, considering all waves of the NEPS. In the NEPS, we distinguish about 50 final disposition codes. We include all reasons for non-participation and apply the AAPOR Standard Definitions to classify the disposition codes. The most relevant category in terms of panel attrition is ‘non-respondents’, distinguishing between ‘refusals and breakoffs’, ‘non-contacts’, and ‘other, non-refusals’. Another dimension is the differentiation between temporary and final dropouts: final disposition codes lead directly to exclusion from the panel, while non-respondents with temporary disposition codes remain in it. After two waves of non-participation, however, these target persons are excluded from the NEPS study. One important question is whether there are ‘at-risk’ disposition codes that ultimately lead to panel attrition.
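Schematically, the logic described here amounts to mapping study-specific disposition codes onto AAPOR nonresponse categories with a temporary/final flag, and applying the two-wave exclusion rule. The code strings and mapping below are invented for illustration and are not the actual NEPS coding scheme.

```python
# Hypothetical mapping from study-specific disposition codes to
# (AAPOR nonresponse category, temporary/final status).
AAPOR_CATEGORY = {
    "refused_interview":   ("refusals and breakoffs", "final"),
    "broke_off_interview": ("refusals and breakoffs", "temporary"),
    "never_reached":       ("non-contacts",           "temporary"),
    "temporarily_ill":     ("other, non-refusals",    "temporary"),
}

def panel_status(outcomes, limit=2):
    """Walk through a person's wave outcomes ("responded" or a
    disposition code).  A final code excludes immediately; `limit`
    consecutive temporary-nonresponse waves also exclude."""
    streak = 0
    for outcome in outcomes:
        if outcome == "responded":
            streak = 0
            continue
        _category, status = AAPOR_CATEGORY[outcome]
        if status == "final":
            return "excluded"
        streak += 1
        if streak >= limit:
            return "excluded"
    return "active"
```

For instance, a single non-contact followed by renewed participation leaves a person active, while two consecutive temporary codes, or one final refusal, lead to exclusion.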
Preliminary findings indicate little change across waves, suggesting that the phenomenon of dropout is stable over the course of a panel study. Consequently, in terms of reasons for non-response, there seems to be no need to adapt recruitment strategies across the waves of a longitudinal study.