All time references are in CEST
Mixing Modes in Longitudinal Surveys
Session Organisers: Professor Mark Trappmann (Institute for Employment Research, University of Bamberg); Dr Mary Beth Ofstedal (Institute for Social Research, University of Michigan)
Time: Tuesday 18 July, 14:00 - 15:30
Room: U6-22
The Covid-19 pandemic has forced many panel and cohort surveys to replace personal interviews with telephone or self-administered modes, thereby accelerating a trend towards mixing modes in longitudinal surveys. While introducing new data collection modes helped prevent attrition, or even the loss of entire survey waves, during the pandemic, it also created new challenges for longitudinal surveys related to mode effects on survey measurement.
Many of the challenges presented by mode effects, and the methodological tools for investigating and adjusting them, differ between longitudinal and cross-sectional surveys. On the one hand, in longitudinal surveys the potential for harm is substantial: even small mode effects can dramatically affect estimates of change if the trait under investigation is relatively stable over time. On the other hand, longitudinal surveys allow researchers to exploit within-subject variation and thus to apply more stringent methods to separate (self-)selection into mode from mode effects on measurement.
We invite submissions of research that investigates mode effects in a longitudinal setting. This may include mode experiments, analyses that separate selection effects from measurement effects, or approaches that separate mode effects from time trends and particularly from effects of the pandemic. We also invite contributions that address the impact of mode effects on longitudinal estimates and that offer solutions for communicating to data users the importance of recognizing potential mode effects and how to deal with them in their research.
Keywords: data collection mode, mixed-mode, longitudinal surveys, mode effects
Dr Susanne Kohaut (IAB)
Dr Iris Möller (IAB) - Presenting Author
The IAB-Establishment Panel is the most comprehensive establishment survey in Germany, with 15,000 firms participating every year. Until 2018 the interviews were conducted face-to-face with paper and pencil (PAPI), with the option of self-completion by leaving the paper questionnaire behind. In 2018 a computer-aided instrument (CAWI/CAPI) was introduced in an experiment, and in 2019 a substantial part of the refreshment sample was switched to CAWI. However, we had never used a mixed-mode design with a computer-aided instrument for the panel firms.
In 2020, during the first pandemic lockdown, we did not dare to plan face-to-face interviews. The refreshment and panel samples were switched to self-administered or telephone interviews without face-to-face contact. To avoid dramatic losses in response, we developed a comprehensive contact plan using a concurrent approach that offered multiple modes at the same time. The panel firms were informed in advance of the changes and contacted by letter with a link to the web questionnaire and a paper questionnaire for self-completion. After some time, all non-respondents were contacted by an interviewer over the telephone. A similar procedure was applied to the refreshment sample.
Nonetheless, the response rates of the refreshment and the panel sample dropped considerably in comparison to pre-pandemic years. In this contribution we will analyse the development of the response rates over recent years and try to disentangle different reasons for the low response rates. We are especially interested in the consequences for the panel sample. In a first step we distinguish between non-contacts and refusals. We also try to find out whether firms that used the web questionnaire in the previous year react differently from firms that were so far only used to the traditional face-to-face mode.
Dr Benjamin Domingue (Stanford Graduate School of Education)
Mr Ryan McCammon (University of Michigan)
Dr Brady West (University of Michigan)
Dr Kenneth Langa (University of Michigan)
Dr David Weir (University of Michigan)
Dr Jessica Faul (University of Michigan) - Presenting Author
As populations age, there is growing interest in assessing health conditions associated with age and longevity, such as age-related decline in cognitive functioning. As a result, there is an increased focus on measuring cognitive functioning in surveys of older populations. A move towards survey measurement via the web (as opposed to phone or in person) is cost-effective but challenging, as it may induce bias in cognitive measures. Compounding this, the mode of survey administration is often not assigned randomly, making inter-group comparisons more difficult. We examine these issues using a novel experiment embedded within the Health and Retirement Study (HRS). The HRS, a US-based cohort of people over 50, has measured cognition since its inception using both in-person and telephone modes. First, we deploy techniques from item response theory (IRT) and differential item functioning (DIF) to estimate the difference in cognitive functioning between web and phone respondents in 2018, based on longitudinal cognition data collected prior to 2018. Second, we estimate the overall effect of taking the survey via the web as compared to the phone. Third, we examine item-level variation in the magnitude of the mode effect and suggest possible methods of adjustment to support longitudinal consistency. We find evidence of an increase in scores for HRS respondents who are randomly assigned to the web-based mode of data collection in 2018. Web-based respondents score higher in 2018 than phone-based respondents, and they show much larger gains relative to their 2016 performance and subsequently larger declines in 2020. The bias in favor of web-based responding is observed across all cognitive item types, but it is most pronounced for the serial 7s task and items on financial literacy. Implications for both the use of HRS data and future survey work on cognition are discussed.
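As an illustration of the item-level DIF screening this abstract describes, the sketch below regresses each (simulated) item response on a matching score plus a mode indicator and flags uniform DIF by mode. All data, variable names, and effect sizes are hypothetical assumptions; the actual HRS analysis relies on full IRT/DIF models rather than this simplified screen.

```python
# Minimal sketch of a uniform-DIF screen for mode effects: logistic regression
# of each item on a matching score plus a mode indicator.
# All data and variable names are simulated/hypothetical, not HRS data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 1000
theta = rng.normal(size=n)                    # latent cognitive ability
web = rng.integers(0, 2, size=n)              # 1 = web mode, 0 = phone mode

items = {}
for j, (difficulty, dif) in enumerate([(-0.5, 0.0), (0.0, 0.4), (0.5, 0.0)]):
    # 'dif' adds a mode-specific shift in the item's log-odds (uniform DIF)
    p = 1 / (1 + np.exp(-(theta - difficulty + dif * web)))
    items[f"item{j}"] = rng.binomial(1, p)

df = pd.DataFrame({"web": web, **items})
df["total"] = df[[c for c in df if c.startswith("item")]].sum(axis=1)

# Flag items with a large mode coefficient after conditioning on the
# matching score -- a standard uniform-DIF screening step.
for j in range(3):
    fit = smf.logit(f"item{j} ~ total + web", data=df).fit(disp=0)
    print(f"item{j}: mode log-odds shift = {fit.params['web']:.2f} "
          f"(p = {fit.pvalues['web']:.3f})")
```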
Professor Heather Kitada Smalley (Willamette University) - Presenting Author
Professor Sarah Emerson (Oregon State University)
In this era of public opinion research, where mixed-mode studies dominate the survey landscape, questions about the presence of mode effects have led to the development of methodology for mode adjustments. These proposed adjustments typically make parametric assumptions about the mode effect model, namely design-based additive/linear versus odds-multiplicative/logistic functional forms. Our previous research has shown that the choice of functional form is not trivial and may result in erroneous inference when using adjusted estimates, depending on the magnitude of the underlying trend or change in the reference response mode. Therefore, the goal of this research is to explore and develop methodology for hypothesis testing to aid survey researchers in choosing modeling techniques for mode effect adjustments based on the data. Previously proposed goodness-of-fit tests have been shown to be poorly calibrated for complex sampling schemes or for violations of the assumption of independent, identically distributed data. In our proposed goodness-of-fit tests, we address the construction of model residuals, the creation of the test statistic, and the approximation of the reference distribution (empirical/bootstrap versus theoretical). We compare candidate models (linear versus logistic) for mode effect adjustment in longitudinal studies via two approaches, a head-to-head comparison and multiple separate comparisons, to address overall model fit. In the latter case, we address the robustness of the procedure and provide insight into further steps that can be taken when each hypothesis is rejected.
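The following is a minimal sketch of the head-to-head comparison idea: a linear and a logistic specification of the mode effect are fit to the same (simulated) binary outcome, and a nonparametric bootstrap approximates the reference distribution of the difference in fit. The data, variable names, and fit statistic are illustrative assumptions, not the authors' proposed test.

```python
# Minimal sketch: linear vs logistic functional forms for a mode effect on a
# binary outcome, with a nonparametric bootstrap for the fit-gap distribution.
# All data and effect sizes are simulated assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 2000
wave = rng.integers(0, 5, size=n)             # panel wave (time trend)
web = rng.integers(0, 2, size=n)              # 1 = web, 0 = interviewer mode
p_true = 1 / (1 + np.exp(-(-0.5 + 0.3 * wave + 0.4 * web)))   # logistic truth
df = pd.DataFrame({"y": rng.binomial(1, p_true), "wave": wave, "web": web})

def fit_gap(d):
    """Difference in mean squared residuals: linear minus logistic."""
    lin = smf.ols("y ~ wave + web", data=d).fit()
    log = smf.logit("y ~ wave + web", data=d).fit(disp=0)
    return np.mean(lin.resid ** 2) - np.mean((d["y"] - log.predict(d)) ** 2)

observed = fit_gap(df)
boot = [fit_gap(df.sample(n, replace=True)) for _ in range(200)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"observed gap = {observed:.4f}, bootstrap 95% interval = [{lo:.4f}, {hi:.4f}]")
```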
Ms Eva Leissou (University of Michigan - Survey Research Center) - Presenting Author
Mr Paul Burton (University of Michigan - Survey Research Center)
Mr Andrew Hupp (University of Michigan - Survey Research Center)
Dr Brady West (University of Michigan - Survey Research Center)
Every six years, the Health and Retirement Study adds a new age cohort to its existing longitudinal panel. Given the narrow recruitment criteria, many sampled households screen ineligible, which requires a great deal of effort and a lengthy field period. While previous screening efforts have been primarily in-person, we invited a portion of the sample in 2022 to complete the screening questionnaire via the web to address declining response rates and increasing costs. Sampled households were randomly assigned to one of two protocols: in-person first, or web first with in-person follow-up for non-responding cases. Households assigned to in-person first were mailed a prenotification letter followed by an interviewer visit. Households assigned to web first were mailed an invitation letter containing the URL and a QR code, a brochure, and a $2 bill. Two weeks later, non-responding households were sent a reminder letter with a URL, a paper questionnaire and business reply envelope. Four weeks later, any non-responding cases were sent to the field for follow-up by an interviewer. As part of the invitation mailing, we tested two different envelope types to see which was more effective. Households were randomly assigned to either receive an envelope with the $2 visible through a small window, or a maize and blue envelope (with no cash visible). In this presentation, we will report on 1) the effectiveness of the web option compared to the traditional in-person option (in terms of response rates and effort), and 2) cost and response rate differences between the visible cash and maize and blue envelopes. Preliminary results suggest that the visible cash envelope is most effective during the web protocol, with higher completion rates than the maize and blue envelope, but we see no difference in the effort required.
Dr Narayan Sastry (University of Michigan) - Presenting Author
Dr Katherine McGonagle (University of Michigan)
Assessment of mode effects on interview responses is important for longitudinal studies that switch from interviewer-administered to self-administered and mixed modes. The gold standard for such an assessment is an experimental design in which respondents are randomized to mode. We implemented such a design in the 2019 Transition into Adulthood Supplement (TAS) to the US Panel Study of Income Dynamics as it shifted from a telephone-only design in 2017 to a fully mixed-mode design (online and telephone) in 2019.
TAS is a national longitudinal survey that interviews young adults biennially, covering a wide variety of topics including those potentially sensitive to mode effects such as alcohol and drug use, sexual behavior, and mental health. TAS has collected eight waves of data since its launch in 2005, with response rates of 87% to 92%.
The eligible TAS-2019 sample comprised 2,964 young adults. We randomly assigned 80% to a mixed-mode data collection protocol that offered an online mode first, followed by telephone. The remaining 20% were randomized to a telephone-only mode with no option of completing the survey online.
Random assignment allowed us to undertake intent-to-treat and treatment-on-the-treated assessments of mode assignment and mode of completion effects on interview responses. We drew on the panel design of TAS to conduct a difference-in-differences analysis of longitudinal changes in interview responses due to interview mode for panel respondents (N=2,121) who had also participated in TAS in 2017 when the survey was telephone only.
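A minimal sketch of the difference-in-differences logic, under simulated data and hypothetical variable names (not the TAS analysis itself): each panel respondent is observed in a telephone-only wave (2017) and a mixed-mode wave (2019), and the interaction of the 2019 wave indicator with the web-assignment indicator estimates the mode effect net of the common time trend.

```python
# Minimal difference-in-differences sketch for a mode-assignment experiment
# within a panel. All data, names, and effect sizes are simulated assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 1200
web_assigned = np.repeat(rng.integers(0, 2, size=n), 2)   # fixed per person
wave2019 = np.tile([0, 1], n)                             # 0 = 2017, 1 = 2019
person = np.repeat(np.arange(n), 2)
# Outcome with a person effect, a common time trend, and a mode effect in 2019
y = (rng.normal(size=n)[person] + 0.2 * wave2019
     + 0.5 * web_assigned * wave2019 + rng.normal(scale=0.5, size=2 * n))
df = pd.DataFrame({"y": y, "web_assigned": web_assigned,
                   "wave2019": wave2019, "person": person})

# Intent-to-treat DiD: the interaction coefficient is the mode effect,
# with standard errors clustered by person.
did = smf.ols("y ~ web_assigned * wave2019", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["person"]})
print(did.params["web_assigned:wave2019"], did.bse["web_assigned:wave2019"])
```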
This presentation will describe results from the intent-to-treat, treatment-on-the-treated, and difference-in-differences analyses of item nonresponse and of substantive response patterns by mode. We will investigate specific hypotheses for potential mode effects, including social desirability bias, satisficing, and questionnaire response