Item Nonresponse and Unit Nonresponse in Panel Studies 2

Session Organisers | Dr Uta Landrock (LIfBi – Leibniz Institute for Educational Trajectories), Dr Ariane Würbach (LIfBi – Leibniz Institute for Educational Trajectories), Mr Michael Bergrab (LIfBi – Leibniz Institute for Educational Trajectories)
Time | Wednesday 19 July, 11:00 - 12:30 (CEST)
Room | U6-21
Panel studies face various challenges, starting with establishing a panel, ensuring panel stability, minimizing sample selectivity and achieving high data quality. All of these challenges are exacerbated by nonresponse. Unit nonresponse may lead to small sample sizes (particularly if it occurs in the initial wave) as well as to panel attrition, for example when (recurrent) non-respondents are excluded from the sample for administrative reasons. Item nonresponse reduces data quality, since it decreases the statistical power of analyses based on the affected variables when respondents with missing information are excluded. In extreme cases, variables may have to be dropped from analyses altogether because of their high proportion of missing values. Both unit nonresponse and item nonresponse may introduce bias, either by increasing sample selectivity or by distorting the distribution of particular variables.
A societal crisis may intensify these challenges in various ways. In the case of the COVID-19 pandemic, it may foster nonresponse for two reasons: it increases stress in the lives of the target populations, and it limits the ability of panel studies to use interviewers to conduct personal interviews and to motivate respondents to participate.
We invite researchers to participate in this discussion, which may include, among many others, the following topics:
- Quantifying item nonresponse and unit nonresponse, including resulting selectivity.
- Measuring the development of item nonresponse and unit nonresponse over panel waves.
- Implications of item nonresponse and unit nonresponse for data quality.
- Strategies for reducing item nonresponse and unit nonresponse, e.g. by developing new question formats or response formats, introducing modified incentive schemes, offering different modes, or allowing mode or language switching.
- Problems related to such measures, e.g., comparability across panel waves.
- Handling item nonresponse and unit nonresponse, for example, by imputation of missing values or weighting (a weighting sketch follows this list).
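As a minimal illustration of the last topic, the sketch below shows one common weighting approach: estimating response propensities with a logistic model and deriving inverse-probability weights for respondents. All data and variable names are hypothetical; the example is not tied to any particular study in this session.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical gross sample: one row per sampled unit, with covariates
# observed for respondents and nonrespondents alike (e.g. frame data).
frame = pd.DataFrame({
    "age":       [34, 51, 29, 62, 45, 38, 57, 41, 26, 70],
    "education": [2, 3, 1, 3, 2, 1, 3, 2, 1, 2],  # 1 = low ... 3 = high
    "responded": [1, 0, 1, 1, 0, 1, 1, 0, 0, 1],  # unit response indicator
})

# Model the response propensity from covariates known for all units.
X = frame[["age", "education"]]
model = LogisticRegression().fit(X, frame["responded"])
frame["propensity"] = model.predict_proba(X)[:, 1]

# Respondents who resemble nonrespondents receive larger weights,
# counteracting the selectivity introduced by unit nonresponse.
respondents = frame[frame["responded"] == 1].copy()
respondents["weight"] = 1.0 / respondents["propensity"]
print(respondents[["age", "education", "weight"]])
```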
Keywords: panel studies, item nonresponse, unit nonresponse
Professor Achim Goerres (University of Duisburg-Essen)
Mr Jonas Elis (University of Duisburg-Essen) - Presenting Author
Professor Sabrina Jasmin Mayer (University of Bamberg)
Why are immigrant-origin voter groups less likely to turn out in established democracies? Previous studies have demonstrated that the explanatory models underlying individual voting participation are the same for immigrant-origin and native voters, but they could not explain why the turnout gap between the two groups persists. A growing problem in election surveys, moreover, is that measures of turnout and other socially desirable political behaviours are increasingly affected by overreporting. This paper integrates theoretical accounts of turnout and identifies three central explanations for these group differences: (a) socialisation experiences (immigrant-origin voters are less socialised into political activity when their parents are less integrated politically), (b) resources (immigrant-origin voters have fewer resources that matter for voting, such as economic resources, education, or knowledge about the political system) and (c) mobilisation (political parties mobilise voters differently, depending on strategic considerations and residence patterns). Using longitudinal data from IMGES II, conducted in the city of Duisburg during the 2021 Bundestag election, we test a three-phase model of the political life cycle on random samples from several immigrant-origin groups as well as natives. The survey thus has to deal with the effects of unit nonresponse, unequal patterns of item nonresponse between immigrant-origin groups, and overreporting of retrospective turnout, in addition to obstacles posed by the COVID-19 pandemic. Dependent variables measured in the survey (e.g. political interest, turnout) can be correlated with the probability of participating in the interviews at the different stages of the longitudinal two-stage survey design. This contribution describes the design of IMGES II and demonstrates how methodological challenges and their solutions affect unit and item nonresponse. Furthermore, possible solutions to the problem of overreporting are discussed.
Ms Lisa Walter (DeZIM) - Presenting Author
Dr Jannes Jacobsen (DeZIM)
Dr Mujtaba Isani (DeZIM)
Previous studies have highlighted that unit and item nonresponse can become a significant problem for survey research, particularly if they occur in the first wave of a panel study and are not random. A review of the literature on this topic reveals different reasons for high nonresponse rates. They can, for example, be related to conceptual or practical factors, but also to (societal) crises and an associated increased level of (individual) stress. In the proposed article, we add to the existing literature by addressing the question of how self-experienced racism can affect nonresponse and dropout rates in panel studies. To this end, we analyze the (non)response rates of the first two waves of the NaDiRa.panel, which will be newly implemented at the beginning of 2023 at DeZIM, the German Centre for Integration and Migration Research. The NaDiRa.panel, conducted as part of the National Discrimination and Racism Monitor of DeZIM, is characterized by its specific thematic focus on monitoring racism, which is why adequate response rates among people who experience racism are of special importance here. Against this background, we test the hypothesis that an individual experience of racism leads to lower response rates, a hypothesis strengthened by studies showing that the experience of (structural) racism can lead to a loss of trust in institutions. Furthermore, combining survey research with approaches from research on racism, the article discusses possible solutions to the challenge of higher nonresponse rates among people who experience racism. In doing so, the proposed article not only adds to the discourse on unit nonresponse and item nonresponse, but also offers unique insights into a new panel study that stands out for its focus on racism.
Dr Nasir Rajah (Centre for Longitudinal Studies, UCL Social Research Institute, University College London)
Professor Lisa Calderwood (Centre for Longitudinal Studies, UCL Social Research Institute, University College London)
Professor Bianca De Stavola (Population, Policy & Practice Department, UCL Great Ormond Street Institute of Child Health, University College London)
Professor Katie Harron (Population, Policy & Practice Department, UCL Great Ormond Street Institute of Child Health, University College London)
Professor George Ploubidis (Centre for Longitudinal Studies, UCL Social Research Institute, University College London)
Dr Richard Silverwood (Centre for Longitudinal Studies, UCL Social Research Institute, University College London) - Presenting Author
There is growing interest in whether linked administrative data can aid analyses subject to missing data in cohort studies. Using linked 1958 National Child Development Study (NCDS) and Hospital Episode Statistics (HES) data, we applied a LASSO variable selection approach to identify HES variables that are predictive of non-response at the age 55 sweep of NCDS. We then included these variables as auxiliary variables in multiple imputation (MI) analyses to explore the extent to which they helped restore the representativeness of the respondents together with the imputed non-respondents, both in terms of early-life variables that were essentially fully observed in NCDS (father’s social class at birth, cognitive ability at age 7) and relative to external population benchmarks (educational qualifications at age 55, marital status at age 55). This approach identified ten HES variables that were predictive of non-response at age 55 in NCDS. For example, cohort members who had been treated for adult mental illness were more than 70% more likely to be non-respondents (risk ratio 1.73; 95% confidence interval 1.17, 2.51). Including these HES variables in the MI analyses helped restore sample representativeness only to a limited extent. Furthermore, there was essentially no additional gain in representativeness relative to analyses using only previously identified survey predictors of non-response (i.e. NCDS rather than HES variables). Whilst this finding may not extend to other analyses or NCDS sweeps, it highlights the potential utility of survey variables in the handling of non-response.
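The two-step logic described in this abstract can be sketched roughly as follows. This is not the authors' implementation: scikit-learn's L1-penalised logistic regression stands in for the LASSO selection step, and a single IterativeImputer draw stands in for MI (proper MI would repeat the imputation with different random draws and pool the results). All data and variable names are synthetic.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(0)
n = 500

# Synthetic linked data: administrative (HES-like) indicators plus a
# survey outcome that is missing for non-respondents.
admin_cols = [f"admin_{i}" for i in range(5)]
df = pd.DataFrame(rng.integers(0, 2, size=(n, 5)), columns=admin_cols)
logits = df.to_numpy() @ rng.normal(size=5) - 0.5
df["nonresponse"] = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)
df["outcome"] = rng.normal(size=n)
df.loc[df["nonresponse"] == 1, "outcome"] = np.nan

# Step 1: LASSO-type selection of administrative predictors of nonresponse.
lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
lasso.fit(df[admin_cols], df["nonresponse"])
selected = [c for c, b in zip(admin_cols, lasso.coef_[0]) if b != 0]
print("selected auxiliary variables:", selected)

# Step 2: use the selected variables as auxiliaries when imputing the
# partially observed outcome (one draw; MI would repeat this and pool).
imputer = IterativeImputer(sample_posterior=True, random_state=0)
df[selected + ["outcome"]] = imputer.fit_transform(df[selected + ["outcome"]])
```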
Mr Felix Süttmann (German Institute for Economic Research) - Presenting Author
Professor Sabine Zinn (German Institute for Economic Research)
After nearly 40 years and 40 waves, the German Socio-Economic Panel (SOEP) changed its survey institute in 2021 from Kantar Public to infas. In addition to this transition, the COVID-19 pandemic hindered the usual survey mode of the SOEP, i.e., computer-assisted personal interviewing (CAPI). Both developments meant substantial challenges and drastic changes for the 2021 SOEP wave: first and foremost, a considerably postponed field start, mixed-mode designs at the level of individual household members, new interviewers, and more telephone interviews than intended. All in all, the changes affected SOEP response rates to such an extent that frequent interventions were required during the field period. Nonetheless, nonresponse increased to previously unobserved levels. Our aim is to quantify and explain this increase.
First, we will present the changes due to the new field institute and the interventions during the field period. This is followed by models detecting the socio-economic groups of survey members with the highest risk of nonresponse. We will also try to disentangle the effects of the pandemic from those of the change of survey institute. The analysis differentiates between the mostly German samples and those of refugees and migrants. We find that existing risk factors compound, and we identify new ones associated with COVID-19 and the change of field institute. As lessons learned, we formulate practical suggestions for future changes of survey institute.
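A minimal sketch of the kind of nonresponse-risk model described above, on synthetic data with invented variable names (the actual SOEP analyses may differ): the education-by-sample interaction probes whether risk factors differ between the mostly German samples and the refugee/migrant samples.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 1000

# Synthetic person-level data for a single wave (all names invented).
df = pd.DataFrame({
    "age": rng.integers(18, 90, n),
    "education": rng.choice(["low", "medium", "high"], n),
    "sample": rng.choice(["general", "migration"], n),
    "mode": rng.choice(["capi", "cati", "cawi"], n),
    "nonresponse": rng.binomial(1, 0.3, n),
})

# Logistic model of unit-nonresponse risk; the interaction term asks
# whether the effect of education differs between sample types.
model = smf.logit("nonresponse ~ age + education * sample + mode", data=df).fit()
print(model.summary())
```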
Mr Johannes Lemcke (Robert Koch-Institut)
Mr Ilter Öztürk (Robert Koch-Institut) - Presenting Author
Mr Daniel Grams (Robert Koch-Institut)
Mr Nugzar Bliadze (Robert Koch-Institut)
Mr Ronny Kuhnert (Robert Koch-Institut)
Mr Patrick Schmich (Robert Koch-Institut)
Background
Within the framework of the health monitoring of the Robert Koch Institute (RKI), complex survey and research studies are regularly carried out. These data collections have so far been implemented as separate, self-contained survey projects. To enable ad hoc studies to be conducted in a timely, resource-efficient and flexible manner in the future, this feasibility study was carried out: participants in an initial telephone health survey were asked about their willingness to be interviewed again and, if they consented, were transferred to a separate online survey platform, the RKI Panel, for further surveys. After the initial telephone health survey, participants willing to be re-surveyed were invited via e-mail to register on the RKI Panel survey platform and, once registered, to participate in further follow-up surveys in online mode within the platform.
Methods
The probability-based sample for the initial telephone recruitment survey was randomly drawn using a telephone selection frame. The study was conducted in winter/spring 2021/2022. To assess the success of this piggy-backing approach to recruiting a probability-based panel sample via a CATI survey, participation rates and selection effects were examined at each observable selection stage.
Results
Of the respondents recruited by telephone (N=2,720), 39% indicated a willingness to be interviewed again in further online surveys in the panel and provided an e-mail address. 14% of these respondents then participated in the online profile data survey. With regard to selection effects, the results suggest that the sampling biases at the different selection stages can be largely attributed to the educational status of the participants. The CATI sample shows that the recruitment process was already characterized by a strong educational bias in the initial sample.
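The stage-wise selection analysis reported here can be sketched as follows (synthetic data, invented column names): computing participation rates by education at each observable selection stage shows where an educational bias enters or widens.

```python
import pandas as pd

# Synthetic recruitment funnel: one row per CATI respondent, with flags
# for each observable selection stage of the panel recruitment.
df = pd.DataFrame({
    "education":  ["low", "high", "high", "low", "high", "low", "high", "low"],
    "consented":  [0, 1, 1, 0, 1, 0, 1, 1],  # willing to be re-interviewed
    "registered": [0, 1, 0, 0, 1, 0, 1, 0],  # registered on the platform
    "profiled":   [0, 1, 0, 0, 0, 0, 1, 0],  # completed the profile survey
})

# Stage-specific participation rates by education level.
for stage in ["consented", "registered", "profiled"]:
    rates = df.groupby("education")[stage].mean()
    print(stage, rates.to_dict())
```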