Panel Conditioning in Longitudinal Surveys

Chair | Dr Bella Struminskaya (GESIS - Leibniz Institute for the Social Sciences)
Coordinator 1 | Dr Vera Toepoel (Utrecht University)
Coordinator 2 | Dr Henning Silber (GESIS - Leibniz Institute for the Social Sciences)
The ELIPSS Panel (Étude longitudinale par internet pour les sciences sociales) is a French probability-based web panel dedicated to social science surveys. Its target population is the French population aged 18-79. Each panel member receives a touchscreen tablet with a 3G/4G connection to complete the self-administered questionnaires. The monthly surveys are administered through a dedicated application pre-installed on the tablet.
The ELIPSS panel ran as a pilot from 2012 to 2015 and has been in its main phase since 2016. The pilot comprised more than 1,000 panel members in 2012, of whom 800 were still active in 2016. To supplement the pilot sample, 2,500 new panel members were recruited offline during the first half of 2016.
The questionnaires are designed by research teams after their survey project has been selected by the ELIPSS scientific and technical board. The surveys can be longitudinal or cross-sectional projects. In addition to these surveys, the ELIPSS team conducts one core survey each year to collect sociodemographic data and one survey on panel members' digital practices and equipment.
ELIPSS offers a unique opportunity to assess panel conditioning.
First, we can study how the answers of the pilot panel members evolved across various topics, such as political attitudes, leisure activities and cultural practices, reports of sociodemographic characteristics, and digital practices. In particular, we will study political-attitude batteries with randomly ordered items administered at several waves. We will also focus on the internet-usage questions, since one specific aspect of ELIPSS is that it provides panel members with a touchscreen tablet connected to the internet. Exploratory results show that respondents who did not use the internet when they entered the panel had become regular users one year later.
Second, we will compare the attitudes and behaviours of the pilot sample, who are experienced respondents, with those of the newly recruited panel members. Several waves of the survey on political attitudes and the ELIPSS core survey are administered to both samples at the same time, in 2016 and again in early 2017.
This paper will address the issue of panel conditioning based on empirical evidence from longitudinal self-administered surveys conducted over four years.
If the answers of panel respondents change due to repeated measurement (i.e., panel conditioning), survey estimates and models based on the collected data can be biased. Studies to date, however, provide inconsistent empirical evidence on the presence and magnitude of panel conditioning. One reason is that such studies are often a by-product of surveys that were not designed with a focus on panel conditioning. In attitudinal and knowledge questions, panel conditioning manifests itself in respondents becoming more opinionated and their opinions becoming more stable over time. However, these findings rely on studies with nonexperimental designs, in which questionnaire learning effects of panel conditioning may be confounded with other factors. To gain insight into the presence and magnitude of panel conditioning effects, we conducted a randomized experiment in an online access panel in Germany (N = 1,100). Four experimental groups varied in their exposure to the target questions over four waves. We compare answers to the attitudinal and knowledge questions between the groups, controlling for the number of times a respondent received certain questionnaire content, respondent characteristics that have been found to influence panel conditioning (e.g., need for cognition), response behavior (e.g., satisficing), and survey evaluation items. In our presentation, in addition to providing empirical evidence on the presence and magnitude of panel conditioning effects, we develop recommendations on optimal designs for studies of panel conditioning.
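For illustration, the following is a minimal Python sketch, on simulated data, of the kind of between-group comparison described above; the column names (exposures, need_for_cognition, satisficing, survey_evaluation), the effect sizes, and the linear-model specification are assumptions of the sketch, not the actual design or analysis.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(42)
    n = 1100  # matches the reported sample size
    df = pd.DataFrame({
        # how often a respondent received the target questions in prior waves (0-3)
        "exposures": rng.integers(0, 4, n),
        "need_for_cognition": rng.normal(0, 1, n),
        "satisficing": rng.normal(0, 1, n),        # e.g., a straightlining index
        "survey_evaluation": rng.normal(0, 1, n),
    })
    # simulated wave-4 attitude answer with a small conditioning effect
    df["attitude_w4"] = (0.15 * df["exposures"]
                         + 0.30 * df["need_for_cognition"]
                         + rng.normal(0, 1, n))

    # compare groups by number of exposures, with the controls named above
    model = smf.ols(
        "attitude_w4 ~ C(exposures) + need_for_cognition"
        " + satisficing + survey_evaluation",
        data=df,
    ).fit()
    print(model.summary())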
Much existing research on panel conditioning focuses on its impact as a source of error across whole samples. Examining data from a methodological experiment mounted on the first six waves of the UK Household Longitudinal Study's Innovation Panel, this paper explores possible differential rates of conditioning between sub-groups of the sample. Established measures of conditioning effects, including comparisons of means, variances, and average inter-item correlations of the items used to construct a scale, are used to uncover evidence of differential conditioning.
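As a sketch of how such measures can be computed, the Python fragment below derives the mean, variance, and average inter-item correlation of a set of scale items for any subgroup; the DataFrame layout and column names are illustrative assumptions.

    import numpy as np
    import pandas as pd

    def conditioning_measures(group: pd.DataFrame, items: list[str]) -> dict:
        """Mean, variance, and average inter-item correlation of scale items."""
        corr = group[items].corr().to_numpy()
        # average the correlations above the diagonal
        off_diagonal = corr[np.triu_indices_from(corr, k=1)]
        return {
            "mean": group[items].mean().mean(),
            "variance": group[items].var().mean(),
            "avg_inter_item_corr": off_diagonal.mean(),
        }

    # usage (hypothetical columns): contrast conditioned and fresh respondents
    # within sub-groups such as age bands or education levels
    # for key, g in data.groupby(["conditioned", "age_band"]):
    #     print(key, conditioning_measures(g, ["item1", "item2", "item3"]))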
Four main findings came out of this research. Of the respondent characteristics explored (gender, age, level of education, and region), two, age and level of education, showed clear signs of affecting rates of conditioning, with conditioning appearing to have a diminished effect among respondents who are older or who have higher levels of education.
The second finding was that, whilst there was no clear geographic pattern in regionally different conditioning effects, there was some evidence that conditioning had a levelling effect in those regions furthest from the mean of the sample as a whole.
Analysing data across the first six waves also made it possible to see how conditioning operated over the longer term, and whether conditioning effects observed early in the life-course of a longitudinal study faded in subsequent waves. This longer-term analysis produced two further findings. The first is that where there was a conditioning effect at wave two for those who were asked the studied measures at every wave, this effect diminished over time and, for most groups, had disappeared by wave four or six.
Interestingly, there was also evidence of a gradual conditioning effect in the comparison group of the sample, who were unconditioned at wave two and partially conditioned at waves four and six. This suggests that conditioning can accumulate over longer periods of time than previous research has indicated, with potential implications for the systematic rotation of modules in and out of waves of a panel study.
Panel survey participation can bring about unintended changes in respondents' reporting and/or behavior. Most studies of such panel conditioning effects have focused on changes in reporting, while studies analyzing changes in behavior are rare, owing to their special data requirements.
Using administrative data linked to a large panel survey, we analyze how repeated survey participation changes respondents' labor market behavior. In previous analyses of the same data, we used propensity-score techniques to estimate the causal effect of participation in three waves of the survey on the take-up of federal labor market programs. The results indicated that panel survey participation increases respondents' take-up of these programs. However, re-analyses with an instrumental-variable approach suggest that those results are not correct.
To estimate the effect of repeated survey participation with an instrumental-variable approach, we selected a second sample from administrative records of people who were eligible for selection into the three-wave survey but were not selected. Our data thus consist of two random subsamples: one selected for the survey and one not.
We use an indicator for (random) selection into the survey as an instrument for actual participation in the survey. This approach allows us to separate the conditioning effect we wish to estimate from confounding due to nonresponse and attrition.
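A minimal sketch of this estimator follows, using simulated records; the column names (selected, participated, program) are illustrative. With a single binary instrument and no covariates, two-stage least squares reduces to the Wald ratio of the reduced-form difference to the first-stage difference.

    import numpy as np
    import pandas as pd

    def wald_iv_estimate(df: pd.DataFrame) -> float:
        """Effect of survey participation on program take-up, instrumented by
        random selection into the survey sample."""
        by_z = df.groupby("selected").mean()
        # reduced form: effect of (random) selection on the outcome
        itt = by_z.loc[1, "program"] - by_z.loc[0, "program"]
        # first stage: effect of selection on actual participation
        first_stage = by_z.loc[1, "participated"] - by_z.loc[0, "participated"]
        return itt / first_stage

    # simulated example: people not selected cannot participate, and some
    # selected people drop out (nonresponse/attrition)
    rng = np.random.default_rng(0)
    n = 10_000
    selected = rng.integers(0, 2, n)
    participated = selected * (rng.random(n) < 0.6)
    program = (rng.random(n) < 0.10 - 0.03 * participated).astype(int)
    df = pd.DataFrame({"selected": selected, "participated": participated,
                       "program": program})
    print(wald_iv_estimate(df))  # close to the simulated effect of -0.03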
Results based on this estimator suggest that panel survey participation leads to a decrease in program participation. The difference between the two sets of results is likely due to unmet assumptions of the propensity-score techniques: propensity-score methods require that there be no selection on unobservables, whereas the instrumental-variable approach does not rely on this untestable assumption.
Our results add to the sparse literature on changes-in-behavior panel conditioning. Furthermore, we draw attention to the methodological challenges in the analysis of panel conditioning effects.
Panel conditioning refers to the bias that arises when repeated measures are collected from the same group of respondents at different points in time. Only a small body of research has examined the frequency and effects of panel conditioning (e.g., Halpern-Manners, Warren, and Torche, 2014). The dependent variables in these studies are often changes in attitudes and behaviors over time. A more interesting question, however, is whether panel conditioning affects data quality over time. For example, will respondents' repeated exposure to scale items increase reliability and reduce straightlining?
Comparing answers given by respondents who first participated in the first wave of a study (Cohort 1) with those given by respondents who first participated in the second wave (Cohort 2) confounds the effects of panel conditioning with those of panel attrition, and it is often unrealistic to assume that attrition happens at random. Given this, different analytical approaches have been proposed to examine the effects of panel conditioning. The approach of Halpern-Manners and colleagues (2014) was to select respondents with the same underlying propensity to stay in the sample, for example, selecting respondents from both cohorts who participated in at least their first two waves of the survey and then comparing their answers in the second wave. The other approach is to examine the answers given by the same group of respondents across waves, for example, comparing answers to the same questions across the first, second, and third waves. The benefit of this second approach is that the effects of panel attrition are eliminated from the analysis entirely. Both approaches are sketched below.
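Both approaches can be expressed compactly; the sketch below assumes a long-format dataset with columns respondent_id, cohort, wave, and answer, all hypothetical names.

    import pandas as pd

    def _stayers(df: pd.DataFrame, cohort: int, required: set) -> pd.Index:
        """IDs of respondents in a cohort who completed the required waves."""
        waves = df[df["cohort"] == cohort].groupby("respondent_id")["wave"].agg(set)
        return waves[waves.apply(required.issubset)].index

    def cohort_comparison(df: pd.DataFrame) -> pd.DataFrame:
        """First approach: keep respondents from each cohort who completed
        their first two waves (waves 1-2 for Cohort 1, waves 2-3 for Cohort 2),
        equalizing the propensity to stay, then compare wave-2 answers."""
        keep = _stayers(df, 1, {1, 2}).union(_stayers(df, 2, {2, 3}))
        wave2 = df[(df["wave"] == 2) & df["respondent_id"].isin(keep)]
        return wave2.groupby("cohort")["answer"].describe()

    def within_respondent(df: pd.DataFrame) -> pd.Series:
        """Second approach: follow the same respondents across waves 1-3, so
        attrition cannot drive differences between waves."""
        waves = df.groupby("respondent_id")["wave"].agg(set)
        always_in = waves[waves.apply({1, 2, 3}.issubset)].index
        return (df[df["respondent_id"].isin(always_in)]
                .groupby("wave")["answer"].mean())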
In this paper, we use both analytical approaches to examine the effects of panel conditioning on data quality in two panel surveys: one conducted on GfK's KnowledgePanel and the other from the General Social Survey (GSS). The KnowledgePanel study examined people's attitudes and beliefs about the terrorist threat to the United States and counterterrorism strategies, with four waves of data collection. For the GSS, we use the 2006, 2008, and 2010 panel data. We examine various measures of data quality, including the proportions of item nonresponse, straightlining, and correct answers to knowledge items, response times, scale reliability, and the measurement and structural invariance of the scales; illustrative implementations of several of these measures are sketched below. We will compare the findings of the two approaches and evaluate the effects of panel conditioning on data quality.
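For concreteness, here are hypothetical Python implementations of three of the listed measures; the flat grid-item layout and column names are assumptions of the sketch.

    import pandas as pd

    def item_nonresponse_rate(df: pd.DataFrame, items: list[str]) -> float:
        """Share of item-by-respondent cells left unanswered (NaN)."""
        return float(df[items].isna().to_numpy().mean())

    def straightlining_rate(df: pd.DataFrame, items: list[str]) -> float:
        """Share of respondents giving one identical answer to every grid item."""
        answers = df[items]
        same = answers.nunique(axis=1).eq(1) & answers.notna().all(axis=1)
        return float(same.mean())

    def cronbach_alpha(df: pd.DataFrame, items: list[str]) -> float:
        """Scale reliability: k/(k-1) * (1 - sum of item variances / total variance)."""
        x = df[items].dropna()
        k = len(items)
        return k / (k - 1) * (1 - x.var(ddof=1).sum() / x.sum(axis=1).var(ddof=1))

    # computed per wave and per cohort, these feed directly into the two
    # comparison approaches described above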