Methodological Challenges in Longitudinal Surveys 2
Session Organiser | Dr Hans Walter Steinhauer (Leibniz Institute for Educational Trajectories) |
Time | Thursday 18th July, 14:00 - 15:30 |
Room | D19 |
This session includes papers that address the challenges of longitudinal survey research, in particular the problems of attrition and panel conditioning.
Keywords: panel surveys, nonresponse, attrition, incentives
Professor Klaus Birkelbach (Universität Duisburg-Essen) - Presenting Author
Mr Christian Sondergeld (Universität Duisburg-Essen)
We will present experiences and results from a longitudinal study of former German high-school students (“Gymnasiasten”) who have been interviewed five times between ages 15 and 65 and discuss strategies used to reduce panel mortality.
In 1969, at the age of 15, n=3240 10th-grade high-school students from North Rhine-Westphalia were interviewed about their social origin, school attainment and scholastic plans. In three follow-ups in 1985 (age 30, n=1987), 1996/97 (age 43, n=1596) and 2010 (age 56, n=1301), this cohort was resurveyed retrospectively about their educational, occupational and private life courses, together with questions on biographical, political and religious attitudes and on their future life plans. In April 2019, fieldwork for the fourth resurvey – the former students of the CHiSP are now about 65 years old and have reached the threshold of retirement – will be finished.
As a panel covering the entire educational and occupational career, the CHiSP allows insights into the mechanisms of the life course of a cohort of educationally privileged former students who grew up during the educational expansion in Germany.
Whereas fieldwork for the first follow-up was carried out as personal interviews by a professional survey institute, the subsequent follow-ups have been conducted in the CATI lab of the University of Cologne. It proved very advantageous to keep everything – from address tracing, through design and programming of the questionnaire, to conducting the interviews in our own CATI lab – in the hands of the project group. The presentation gives an overview of the CHiSP data material and discusses panel attrition based on information from previous waves.
Mrs Aigul Mavletova (National Research University Higher School of Economics) - Presenting Author
Mr Evgeniy Terentev (National Research University Higher School of Economics)
Mr Alexandru Cernat (Social Statistics, University of Manchester)
Computer-assisted personal interviewing (CAPI) is widely used in both cross-sectional and longitudinal surveys. However, there is limited evidence on how CAPI affects data quality in longitudinal household surveys compared to paper-based interviewing (PAPI). The paper presents the results of an experimental study comparing data quality in CAPI and PAPI in the 26th wave of a nationally representative longitudinal household study in Russia, the RLMS-HSE (Russia Longitudinal Monitoring Survey - Higher School of Economics), conducted in 2017-2018. To our knowledge, only two household longitudinal studies worldwide - the German Socio-Economic Panel (SOEP) and the Household, Income and Labour Dynamics in Australia (HILDA) Survey - have conducted experimental studies measuring the effect of using CAPI on data quality and published the results. The UKHLS shifted from PAPI to CAPI in 1999 without running any experimental studies. The results of SOEP and HILDA showed that the effect of the shift from PAPI to CAPI varies depending on the specific features of the individual and household questionnaires in the panel and on the way in which CAPI is implemented.
The overall sample of the experimental study in the RLMS-HSE panel comprises 2213 questionnaires. First, CAPI produced fewer non-substantive answers such as “don’t know” and “refuse to answer” when these options were not presented on the screen. Second, CAPI did not lead to any consistent difference compared to PAPI with regard to the length of answers to open-ended questions or social desirability bias. However, higher item nonresponse rates and longer completion times were found in CAPI, although these were mainly due to programming errors. Finally, we found that the characteristics of both interviewers and respondents, as well as interviewers’ skills, affect completion times in CAPI.
Mrs Tanja Burgard (ZPID - Leibniz Institute for Psychology Information) - Presenting Author
Professor Michael Bosnjak (ZPID - Leibniz Institute for Psychology Information)
Dr Nadine Kasten (University of Trier)
Relevance & Research Question:
Panel conditioning is a learning effect that can endanger the validity of results from panel studies. It describes actual changes in attitudes or behaviours, or changes in the way they are reported, due to participation in previous survey waves.
Panel conditioning effects are heterogeneous and can manifest in different ways. For example, experience with survey participation may lead respondents to answer filter questions negatively more often in order to reduce response burden. Other possible effects are changes in knowledge or a reduction in socially desirable answering. As these effects are too diverse to be captured by one overall effect, moderator and subgroup analyses are necessary. Corresponding moderating influences, such as the design and timing of the surveys or the year of data collection, are examined as well.
Methods & Data:
To be included in the meta-analysis, articles had to report (quasi-)experiments involving a control group of fresh respondents, or actual information from a registry, and at least one group of conditioned respondents. Both groups had to be exposed to identical survey questions to enable between-group comparisons of quantitative outcomes. 44 reports met these criteria.
Within the 25 reports coded so far, 115 individual studies were identified. These studies contain 346 effect sizes in total. The effect sizes are nested within studies; to account for this dependency, three-level mixed-effects models will be used (see the sketch below).
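As a rough sketch of this modelling approach (standard three-level meta-analytic notation; the symbols are not taken from the abstract), the j-th effect size in report i can be written as

\[
y_{ij} = \mu + u_i + u_{ij} + e_{ij}, \qquad
u_i \sim N\!\left(0, \tau^2_{\text{between}}\right), \quad
u_{ij} \sim N\!\left(0, \tau^2_{\text{within}}\right), \quad
e_{ij} \sim N\!\left(0, v_{ij}\right),
\]

where \(v_{ij}\) is the known sampling variance of effect size \(y_{ij}\), \(\tau^2_{\text{within}}\) captures heterogeneity among effect sizes within a report, and \(\tau^2_{\text{between}}\) captures heterogeneity between reports; moderators enter by replacing \(\mu\) with \(\mathbf{x}_{ij}^{\top}\boldsymbol{\beta}\).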
Results:
First analyses showed that panel conditioning effects are more pronounced for knowledge questions than for other types of questions. Further moderating effects were weak and mostly not significant in the current data set, which will be extended soon.
Added Value:
Differentiating between types of conditioning enables conclusions about the effects of panel conditioning on data quality. Moreover, the meta-analysis aims to provide recommendations on the timing and design of panel surveys as well as on an appropriate operationalisation of repeated items.
Dr Hans Walter Steinhauer (Leibniz Institute for Educational Trajectories) - Presenting Author
Dr Sabine Zinn (Leibniz Institute for Educational Trajectories)
We study the use of longitudinal survey weights in longitudinal multilevel modelling by means of simulation. Using an application from empirical educational research, we analyse the composition and the competence development of a synthetic population of grade 7 students over a period of three years. Mimicking common sampling procedures of student surveys, schools and students are sampled from this population: schools with probability proportional to size at the first stage and classes at the second stage. Initial nonresponse is imposed at both the school and the student level. The participating schools and students form the panel. Nonresponse over time is assumed to occur at the student level only. To compensate for nonresponse in statistical analyses, we compute longitudinal weights as the product of a nonresponse-adjusted design weight and the inverse of estimated wave-specific response propensities (see the sketch below). The performance of longitudinal weights is compared for different missing-data mechanisms as well as for balanced and unbalanced panel data. We compare weighted analyses for estimates of means and correlations as well as for a longitudinal multilevel model of competence growth. For means and correlations, we find weighted estimates using unbalanced panel data to be the least biased compared to using balanced data or unweighted estimation. In contrast, we find that weighted analyses of the growth curve model notably bias estimated parameters related to time.
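A minimal sketch of this weight construction follows, under stated assumptions: the variable names (design_weight_adj, the covariate columns, the wave participation indicators) are hypothetical, and the wave-specific response propensities are estimated here with a logistic regression, which the abstract does not specify.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

def longitudinal_weights(df, covariate_cols, wave_response_cols,
                         base_weight_col="design_weight_adj"):
    """Longitudinal weight = nonresponse-adjusted design weight
    times the product of inverse estimated wave-specific response propensities."""
    w = df[base_weight_col].to_numpy(dtype=float)
    eligible = np.ones(len(df), dtype=bool)             # still participating in the panel
    for col in wave_response_cols:                       # e.g. ["responded_w2", "responded_w3"]
        X = df.loc[eligible, covariate_cols]
        y = df.loc[eligible, col]
        model = LogisticRegression(max_iter=1000).fit(X, y)
        p_hat = np.ones(len(df))
        p_hat[eligible] = model.predict_proba(X)[:, 1]   # estimated response propensity per student
        w = w / p_hat                                    # inverse-propensity adjustment for this wave
        eligible &= df[col].astype(bool).to_numpy()      # only continuing respondents stay eligible
    w[~eligible] = np.nan                                # dropouts receive no longitudinal weight
    return pd.Series(w, index=df.index, name="w_long")
```

For example, with two follow-up indicators a student who responds in both waves receives the adjusted design weight divided by the two estimated propensities, while a wave dropout is set to missing and contributes only to the balanced-versus-unbalanced comparisons described above.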