Longitudinal surveys – Special challenges and innovative solutions in panel studies

Convenor: Dr Jutta Von Maurice (NEPS)
Coordinator 1: Mrs Joanne Corey (Australian Bureau of Statistics)
The session will cover the organisation of panel studies, from tracking of respondents and sample review prior to field enumeration, through the recruitment and training of interviewers, to field logistics, field monitoring, and reporting. The focus is on the particular challenges faced by those running panel studies, such as:
- Finding effective and reliable tracking methods to locate, prior to field enumeration, non-contacts and participants whose life situations have changed since previous waves;
- The value of continuing with participants who have been long-term non-contacts or refusals. Considerable effort goes into these groups in sample management and data collection, through tracking and face-to-face interviewer visits, and also in instrument design (roll-forward and catch-up questions);
- Developing effective engagement strategies aimed at ensuring the long-term commitment of respondents of different age groups;
- Conducting standardised interviewer training across a large interviewer panel, and repeating training each wave when a significant amount of the content remains stable yet the panel contains a mix of new and experienced interviewers;
- Managing regular field logistics, such as launching fieldwork while taking into account that the environment and context may have changed since the last field period;
- Monitoring field progress, taking into account the length of the enumeration period, keeping track of refusals and non-contacts, and managing transitions in life paths, such as from kindergarten to school or from primary to secondary school;
- Reporting during enumeration: how often to report, what to report on, and the presentation and usefulness of reports.
Since 2005/2006, more than 3,000 preschool and primary school children and their families in two German federal states (Bavaria and Hesse) have been followed for up to eight years through their educational careers in the context of the DFG research group BiKS. After six years, families in Bavaria had to consent to their participation once more, due to a policy change, by returning an additional signed consent form by mail. This external shock led to a significant increase in dropouts (~5% vs. 31%), as families who did not respond at all (passive non-respondents, N=460) had to be treated as if they had actively refused participation (active non-respondents, N=432). As we have several years of information about the families who actively or passively dropped out, this situation gives us a rare chance to gain a better understanding of the determinants of refusal to participate in longitudinal studies in general, and of the determinants of passive non-response in particular. Applying multinomial logistic regression analyses, we find that social status, parental education, migration background, and children's performance correlate with non-response as expected and in line with recent studies of non-response, i.e. educated and high-SES families drop out less. Furthermore, passive and active non-respondents do not differ significantly in their social background characteristics. This leads to the conclusion that resources spent on activating passive non-respondents could be spent more efficiently on retaining active participants.
Further co-author: Claudia Karwath
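A minimal illustrative sketch (not from the paper) of a multinomial logistic regression of the kind described above, assuming a three-category outcome (continued participation, passive non-response, active refusal) and using Python's statsmodels library; all variable names and data below are hypothetical.

```python
# Illustrative sketch only: multinomial logistic regression of the kind
# described in the abstract. Variable names and data are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 500

# Hypothetical family-level covariates
df = pd.DataFrame({
    "high_ses": rng.integers(0, 2, n),            # 1 = high socio-economic status
    "parent_tertiary": rng.integers(0, 2, n),     # 1 = parent has tertiary education
    "migration_background": rng.integers(0, 2, n),
    "child_performance": rng.normal(0, 1, n),     # standardised test score
})

# Outcome: 0 = continued participation, 1 = passive non-response, 2 = active refusal
outcome = rng.choice([0, 1, 2], size=n, p=[0.7, 0.15, 0.15])

X = sm.add_constant(df)
model = sm.MNLogit(outcome, X)
result = model.fit(disp=False)
print(result.summary())  # one set of coefficients per non-response category vs. the baseline
```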
This paper presents results of an experiment conducted with mobile phones in Mexico City between July and August 2011. The study was designed to compare results over time from a three-wave survey conducted within a one-week span. The Total Survey Error approach is used to inform the discussion in this paper, in particular aspects related to measurement error (sampling, nonresponse, and coverage error are discussed elsewhere). In the first wave of the survey, one hundred cases were interviewed face to face; this wave served as the baseline. For the second and third waves, the same respondents were contacted, and those who agreed to participate were given a credited mobile phone to increase the likelihood of securing an interview with them. In the baseline survey (first wave) and in the next two waves (conducted on mobile phones), public opinion questions were asked. Questions related to party identification, approval of public officials, retrospective and prospective assessments of the economy, vote preference, perception of public safety, and demographic variables were measured across time. These questions were chosen for the three-wave panel because of their relevance in public opinion studies. In this paper, we study how the distribution of each variable changed over time. The purpose of the study is to explore differences in measurement error related to the mode of data collection (mobile phones versus face-to-face surveys) and to characteristics of interviewers.
A time-consuming part of organising empirical field research is arranging appointments with participants, especially when individuals are the subjects of the study. Often a part of the sample is hard to reach by phone. Within the scope of the BiKS project, an online appointment system was used to coordinate the annual family visits. Participants could arrange their appointments online according to their own schedules, using an individual code they had received beforehand. Around one quarter (n=63) of our sample of 242 families, who since 2005 had been used to arranging their appointments by phone, now used the online appointment alternative instead. To evaluate its usability for field research and the strength of possible sample bias, the participants were asked some additional questions. We can show that personal attitudes are more important in explaining usage of the e-appointment system than socio-demographic characteristics. Overall, the e-appointment system does not increase sample selectivity bias, so that, particularly with larger samples, it is an instrument as well suited as phone-based arrangements for making contact with people who are hard to reach, while requiring very little of the researcher's time.
Further co-authors: Dr. Monja Schmitt, Mr. Daniel Mann
Although time diaries are considered the most valuable and detailed method for (large-scale) studies of daily life, they are also highly expensive to carry out. In the paper "MOTUS: A new tool for time diary data collection" we demonstrated that MOTUS reduces these research costs and that the solutions behind this cost reduction additionally result in higher-quality time-diary data. This contribution focuses on another benefit of MOTUS: its modularity.
As we will demonstrate, the benefits of this modularity are twofold: 1) it allows low-cost start-up of new research, and 2) it allows topic-specific research within the full context of daily life. Once developed, MOTUS' infrastructure consists of scripts for questionnaires, activity-based diaries, and a respondent management and monitoring system, which means that setting up a new study simply requires providing input for the questionnaires, adjusting the activity lists for diary registration, and uploading respondent contact information. Moreover, this makes it easier to align questionnaires or activity lists with international standards (e.g. the EUROSTAT guidelines for harmonised time-use surveys).
The advantage of time diaries, capturing daily life in its full context, may sometimes also be considered a disadvantage: some activities are hardly captured in time-diary data (e.g. media use, transportation). As we will show, MOTUS' modularity allows the creation of topic-specific modules that attach additional questions to certain activities, examining them in more depth without having to design, for example, a dedicated transportation survey and thereby lose the daily context.
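A purely illustrative sketch of the modularity idea described above (not MOTUS' actual code or API): a study is assembled from a questionnaire, an activity list, and respondent contacts, and topic-specific modules attach follow-up questions to particular diary activities. All names and data are hypothetical.

```python
# Purely illustrative sketch of a modular time-diary setup (not MOTUS' actual API).
from dataclasses import dataclass, field

@dataclass
class Module:
    """Topic-specific module: extra questions attached to certain diary activities."""
    name: str
    trigger_activities: set[str]
    follow_up_questions: list[str]

@dataclass
class Study:
    questionnaire: list[str]        # background questionnaire items
    activity_list: list[str]        # e.g. harmonised (EUROSTAT-style) activity categories
    respondents: list[str]          # contact information (here just e-mail addresses)
    modules: list[Module] = field(default_factory=list)

    def questions_for(self, activity: str) -> list[str]:
        """Return the follow-up questions triggered when a respondent logs an activity."""
        return [q for m in self.modules if activity in m.trigger_activities
                for q in m.follow_up_questions]

# Setting up a new study = supplying a questionnaire, an activity list, and contacts.
transport = Module("transport", {"travel"}, ["Which mode of transport did you use?"])
study = Study(
    questionnaire=["age", "household size"],
    activity_list=["sleep", "paid work", "travel", "media use"],
    respondents=["respondent1@example.org"],
    modules=[transport],
)
print(study.questions_for("travel"))  # -> ["Which mode of transport did you use?"]
```

In a design of this kind, adding a transportation or media module changes only the study configuration, not the diary infrastructure itself, which is the sense in which the diary's full daily context is preserved.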