Measurement error in large-scale surveys and panels

Session Organiser: Dr Shelley Feuer (Census)
Time: Friday 16 July, 15:00–16:30
Mr Carlos Poses (RECSM - Universitat Pompeu Fabra) - Presenting Author
Mrs Hannah Schwarz (RECSM - Universitat Pompeu Fabra)
Mrs Wiebke Weber (RECSM - Universitat Pompeu Fabra)
Different response scales used in surveys lead to differences in measurement quality and measurement error. Estimating the size of these measurement errors for different questions and scales is thus crucial, both to reduce them in questionnaire design and to account for them later in substantive analyses. In this paper, we present the results of a Split-Ballot Multitrait-Multimethod (SB-MTMM) experiment dealing with questions on the topic “evaluations of democracy”, implemented in Round 9 (2018) of the European Social Survey. It was designed to estimate the measurement quality (i.e., the complement of measurement error) of questions using three response scales: an 11-point ‘apply’ scale, an 11-point item-specific scale, and a 10-point item-specific scale. The average measurement quality across country-language groups is highest for the 10-point item-specific scale, followed by the 11-point item-specific scale and then by the 11-point ‘apply’ scale. Additionally, we present the results for each of the 29 country-language groups analyzed, as well as the estimates of reliability and validity for each response scale. We also discuss the theoretical reasons for the differences in quality and reflect on the practical implications of our findings.
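For context, quality estimates of this kind are typically derived from the true-score MTMM model of the ESS tradition (Saris & Andrews, 1991), in which the observed response y_ij to trait i measured with method j is decomposed as

  y_ij = r_ij · T_ij + e_ij,    T_ij = v_ij · F_i + m_ij · M_j,

where r_ij is the reliability coefficient, v_ij the validity coefficient, F_i the trait factor, and M_j the method factor. Measurement quality is then q_ij^2 = (r_ij · v_ij)^2, the product of reliability (r_ij^2) and validity (v_ij^2); this standard decomposition is presumably what underlies the reliability and validity estimates mentioned above.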
Ms Hannah Schwarz (Universitat Pompeu Fabra) - Presenting Author
Dr Wiebke Weber (Universitat Pompeu Fabra)
Ms Isabella Minderop (GESIS - Leibniz Institute for the Social Sciences)
Dr Bernd Weiß (GESIS - Leibniz Institute for the Social Sciences)
In times of decreasing response rates, monetary incentives are increasingly used to motivate individuals to participate in surveys. Receiving an incentive can affect respondents’ motivation to take a survey and, as a consequence, their survey-taking behaviour. On the one hand, the resulting extrinsic motivation might undermine intrinsic motivation, leading respondents to invest less effort in answering a survey. On the other hand, monetary incentives could make respondents more eager to invest effort in answering a survey, as they feel they are compensated for doing so. This study aims to assess whether there are differences in measurement quality between respondents who are motivated to take surveys by the received incentive and respondents for whom this is not a reason for participation. We implemented MTMM (Multitrait-Multimethod) experiments in the probability-based GESIS Panel in Germany to estimate the measurement quality of various questions asked of panellists. Furthermore, by coding panellists’ open answers to a question about their reasons for participation, we can distinguish panellists who are motivated by the incentive from those who are not. We then analyse the MTMM experiments for these two groups separately and compare the resulting measurement quality estimates.
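To make the planned group comparison concrete, a minimal sketch in Python with the open-source semopy package is given below. It is not the authors’ code: the file and column names are hypothetical placeholders, and the one-factor model is a simplification of the full MTMM models fitted in the study. The idea it illustrates is fitting the same measurement model separately per motivation group and comparing the estimates.

    # Minimal sketch (hypothetical names): fit the same measurement model
    # separately for incentive-motivated and other panellists.
    import pandas as pd
    import semopy

    df = pd.read_csv("gesis_panel_mtmm.csv")  # placeholder file name

    # Simplified one-trait model; the study itself uses full MTMM models.
    desc = "trait =~ item1 + item2 + item3"

    for motive, group in df.groupby("motivation"):  # e.g. 'incentive', 'other'
        model = semopy.Model(desc)
        model.fit(group)
        print(motive)
        print(model.inspect())  # compare loadings and error variances across groups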
Mrs Fabienne Kraemer (GESIS - Leibniz Institute for the Social Sciences) - Presenting Author
Dr Henning Silber (GESIS - Leibniz Institute for the Social Sciences)
Professor Bella Struminskaya (Utrecht University)
Professor Michael Bosnjak (ZPID - Leibniz Institute for Psychology)
Mrs Joanna Koßmann (ZPID - Leibniz Institute for Psychology)
Dr Bernd Weiß (GESIS - Leibniz Institute for the Social Sciences)
Longitudinal surveys allow researchers to study stability and change over time and to analyze causal relationships. However, panel studies also have methodological drawbacks, such as the threat of panel conditioning effects (PCE). PCE are artificial changes in respondents’ attitudes, behavior, and knowledge, or in the way these phenomena are reported, that are caused solely by prior survey participation. PCE pose a major threat to the validity of research based on longitudinal data. For data quality, however, PCE can be both beneficial and disadvantageous. Therefore, a closer analysis of the existence and consequences of PCE for data quality is crucial.
In the present research, we investigate the existence of PCE and their consequences for data quality within the GESIS Panel, a probability-based mixed-mode access panel administered quarterly to a random sample of the German-speaking population aged 18+ years. To account for panel attrition, two refreshment samples were drawn in 2016 and 2018. To identify PCE, we conduct between-person comparisons across the different cohorts (i.e., using the refreshment samples), which represent respondents with varying levels of experience. On the one hand, we hypothesize that more experienced respondents show shorter response latencies and a lower prevalence of don’t-know answers due to previous reflection and familiarity with the answering process. On the other hand, experienced respondents are expected to show more satisficing (i.e., straightlining, speeding, and misreporting). Finally, becoming familiar with the survey process might decrease the extent of socially desirable responding among experienced respondents.
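As an illustration of one such between-cohort comparison, a minimal sketch is given below. It is not the authors’ code; the file and column names are hypothetical placeholders. It tests whether panellists from an earlier cohort answer faster than a fresh refreshment cohort.

    # Minimal sketch of a between-cohort comparison of response latencies.
    # 'cohort' and 'latency_sec' are hypothetical column names.
    import pandas as pd
    from scipy import stats

    df = pd.read_csv("gesis_panel_paradata.csv")  # placeholder file name

    experienced = df.loc[df["cohort"] == 2014, "latency_sec"]
    fresh = df.loc[df["cohort"] == 2018, "latency_sec"]

    # Welch's t-test: do experienced respondents answer faster?
    t, p = stats.ttest_ind(experienced, fresh, equal_var=False)
    print(f"t = {t:.2f}, p = {p:.4f}")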
First results provide evidence for the existence of PCE within the GESIS Panel. The findings indicate both a beneficial and a disadvantageous impact of panel conditioning on the quality of panelists’ responses. Beneficial conditioning entails a significantly lower prevalence of “don’t know” responses among experienced sample members, as well as significantly shorter response latencies. On the other hand, experienced respondents show a significantly higher prevalence of speeding, which provides evidence for disadvantageous conditioning. However, there was no evidence for further disadvantageous conditioning, such as a greater extent of straightlining or misreporting when answering filter questions. Likewise, there was no evidence for differences in the extent of socially desirable responding.
In summary, our research shows that prior survey participation has both positive and negative effects on data quality within the GESIS Panel. Higher familiarity with the survey process and survey questions can be beneficial in terms of the time taken to complete the questionnaire and the prevalence of “don’t know” responses. However, the higher prevalence of speeding among more experienced respondents indicates that prior survey participation leads to less accurate response behavior, affecting data quality negatively.
In conclusion, PCE can negatively affect the validity of longitudinal data and can undermine the results of a multitude of analyses based on the respective panel data. Our findings contribute to the investigation of the effects of panel conditioning on response quality and may encourage similar analyses in other countries.
Dr Patrick Lazarevic (Vienna Institute of Demography) - Presenting Author
Background: Health is a fundamental aspect of many scientific disciplines, and its definition and measurement are the analytical core of many empirical studies. Comprehensive measures of health, however, are typically precluded in survey research by financial and temporal restrictions. At the other extreme, self-rated health (SRH) as a single indicator of generic health lacks measurement invariance by age and is biased by non-health influences. The three-item Minimum European Health Module (MEHM) complements SRH with global questions on chronic health conditions and health-related activity limitations and can thus be seen as a compromise between these two approaches.
Methods: Using data from the German Ageing Survey (2008 & 2014; n = 12,037), we investigated the feasibility of combining the MEHM items into a generic health indicator and assessed its utility using SRH as a benchmark. Additionally, we explored an extended version of the MEHM that adds information on multimorbidity and the presence and intensity of chronic pain.
Results: Our analyses showed that both versions of the MEHM had good internal consistency, and each represented a single latent variable that can be computed using generalized structural equation modeling. These indicators showed great promise, significantly reducing age-specific reporting behavior and some of the non-health biases that affect SRH.
Outlook: To further attenuate systematic response behavior, this approach can be extended by priming the meaning of health in SRH and by MIMIC modeling.
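A minimal sketch of the one-factor measurement model described in the Results is given below, using Python’s open-source semopy package. It is not the authors’ code: item and file names are hypothetical placeholders, and a linear SEM is shown for simplicity, whereas the study uses generalized SEM to respect the categorical MEHM items.

    # Minimal sketch (hypothetical names): one latent health factor
    # measured by the three MEHM items.
    import pandas as pd
    import semopy

    data = pd.read_csv("mehm_items.csv")  # placeholder: srh, chronic, limitations

    desc = "health =~ srh + chronic + limitations"

    model = semopy.Model(desc)
    model.fit(data)
    print(model.inspect())           # loadings and error variances
    print(semopy.calc_stats(model))  # fit statistics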
Dr J. Kappelhof (The Netherlands Institute for Social Research/SCP) - Presenting Author
Mr R. Briceno-Rosas (GESIS)
Dr D. Ackermann-Piek (GESIS)
Dr M. Dousak (University of Ljubljana)
Dr J. van de Maat (The Netherlands Institute for Social Research)
Dr P. Flore (The Netherlands Institute for Social Research)
Interviewers can affect both the measurement and the representation dimension of the Total Survey Error framework (TSE; Groves et al. 2009). Undesirable interviewer behavior (UIB) can affect not only the accuracy of estimates but also their comparability in multinational, multiregional, or multicultural (3MC) surveys. Reducing UIB is therefore an even more urgent area of attention for interviewer-assisted surveys in a 3MC context, especially large-scale face-to-face surveys employing many interviewers at the same time.
The European Social Survey (ESS) aims to measure attitudes, beliefs and behaviour patterns in a changing world, as well as to improve survey methodology in cross-national studies. Since 2001, it has been conducted as a biennial cross-national face-to-face survey, and the 10th Round is currently ongoing. To keep the potential for UIB and its adverse effects on data quality to a minimum, the ESS has developed a new work package that tackles this issue with a holistic approach to interviewer behaviour and interviewers’ involvement in the survey life cycle. It aims to provide quality assurance (QAssu) during the preparatory phase, quality control (QC) during the data collection phase, and quality assessment (QAsess) during the post-data-collection phase of the survey life cycle. This approach should allow the ESS to prevent, detect, and assess interviewer-related issues affecting ESS data quality. Furthermore, by developing a work package on interviewer behaviour, being transparent about what we do to tackle UIB, and documenting the outcomes, we aim to further increase trust in the ESS data among the wider user community.
We present the ESS approach to minimizing UIB, describe in detail the challenges of carrying out effective QAssu, QC, and QAsess in a timely and comparable way, and present some results. We also discuss some of the specific challenges encountered when implementing a face-to-face survey in the midst of the current pandemic and reflect on the role of the interviewer in the survey landscape.