Assessing the Quality of Survey Data 5
Session Organiser | Professor Jörg Blasius (University of Bonn)
Time | Thursday 18th July, 09:00 - 10:30 |
Room | D11 |
This session will present a series of original investigations on data quality in both national and international contexts. The starting premise is that all survey data contain a mixture of substantive and methodologically induced variation. Most current work focuses primarily on random measurement error, which is usually treated as normally distributed. However, there are many different kinds of systematic measurement error or, more precisely, many different sources of methodologically induced variation, all of which may have a strong influence on the “substantive” solutions. These sources include response sets and response styles, misunderstandings of questions, translation and coding errors, uneven standards across the research institutes involved in data collection (especially in cross-national research), item and unit nonresponse, and faked interviews. We consider data to be of high quality when the methodologically induced variation is low, i.e. when differences in responses can be interpreted on the basis of theoretical assumptions in the given area of research. The aim of the session is to discuss different sources of methodologically induced variation in survey research, how to detect them, and the effects they have on substantive findings.
Keywords: Quality of data, task simplification, response styles, satisficing
Mr Miha Matjašič (Research assistant) - Presenting Author
Professor Vasja Vehovar (Professor)
A speeder is a unit that completed a web survey so (unacceptably) fast that it should be eliminated. However, eliminating these units requires identifying them unambiguously. We argue that the basic criterion here should be response quality, not the speeding itself. Of course, implementing response-quality criteria, instead of a mere technical detection based on some external benchmark for response-time values, is extremely difficult. There is clear evidence that extreme speeders exhibit satisficing behaviour and other aspects of low response quality, but there are also speeders with acceptable response quality. To address the problem, we thus need to start by defining unacceptable response quality rather than unacceptable response speed. We conducted a meta-study in which responses from past web surveys run on an open-source platform were included in the analysis. Our focus was on available paradata (item nonresponse, straight-lining, responses to open-ended questions and other indicators), which provide the basis for identifying units with low response quality. The results confirmed our hypothesis that criteria for eliminating speeders should also incorporate response quality and not only response time itself. Incorporating response quality into the speeder elimination criteria also shows that these criteria differ across surveys. It is, however, true that paradata still offer relatively weak insight into response quality, so it is recommended to expand web questionnaires with some standard quality-control and evaluation questions.
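To illustrate the kind of rule the abstract argues for, the sketch below combines a response-time benchmark with simple paradata-based quality indicators, so that speed alone never triggers elimination; the column names and thresholds are illustrative assumptions, not the criteria used in the study.

    import pandas as pd

    def flag_speeders(df, min_seconds=60, max_straightline=0.8, max_item_nonresponse=0.3):
        """Flag a unit for elimination only when fast completion coincides with
        low response quality (all thresholds are illustrative, not the study's)."""
        fast = df["duration_sec"] < min_seconds                     # external response-time benchmark
        low_quality = (
            (df["straightline_share"] > max_straightline)           # share of identical answers in grids
            | (df["item_nonresponse_share"] > max_item_nonresponse)
            | (df["open_ended_chars"] == 0)                         # empty open-ended answers
        )
        out = df.copy()
        out["drop_as_speeder"] = fast & low_quality                 # speed alone is not sufficient
        return out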
Ms Felicitas Mittereder (University of Michigan) - Presenting Author
With the increasing use of the internet for social research, web surveys have become one of the most important and efficient tools for collecting survey data. One of the biggest threats to data quality in web surveys is breakoff, which occurs in this mode of data collection much more prominently than in any other mode. Given the already lower response rates in web surveys compared with more traditional data collection modes, it is crucial to keep as many and as diverse respondents as possible in a given web survey and to prevent breakoff bias, thereby maintaining high data quality and producing accurate survey estimates.
We fitted a dynamic survival model to data from a real web survey to predict the likelihood of breaking off at both the respondent and page levels. This model makes use of the survey data, along with rich paradata and accessible administrative information from the sampling frame.
After evaluating the quality of predictions based on the model, we applied it as part of a randomized experiment designed to reduce breakoff in the same ongoing online survey on sustainability conducted by the Institute for Social Research at the University of Michigan. We used the model to predict page-level breakoff risks in real time while respondents were taking the web survey. Respondents in the treatment group saw an intervention message once their risk of breaking off passed a certain threshold, while respondents in the control group received the standard data collection procedure.
Our analyses show that female respondents and students reacted positively to the intervention messages and broke off at lower rates when assigned to the treatment group. Additionally, breakoff respondents within the treatment group answered more survey questions than untreated breakoff respondents.
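A compact sketch of the type of model described above: a discrete-time survival (page-level logistic) specification fitted to respondent-page records, with an intervention triggered once the predicted breakoff hazard crosses a threshold. The feature names, file name, and the 0.5 cutoff are assumptions for illustration, not the authors' actual specification.

    import pandas as pd
    import statsmodels.formula.api as smf

    # One row per respondent-page; broke_off = 1 on the page where a breakoff occurred.
    # Feature names stand in for survey data, paradata, and sampling-frame information.
    pages = pd.read_csv("respondent_page_records.csv")

    hazard_model = smf.logit(
        "broke_off ~ page_number + items_on_page + seconds_on_prev_page + mobile_device + age",
        data=pages,
    ).fit()

    RISK_THRESHOLD = 0.5  # illustrative cutoff for showing the intervention message

    def should_intervene(current_page_row):
        """Return True if the predicted page-level breakoff hazard exceeds the threshold."""
        risk = float(hazard_model.predict(current_page_row).iloc[0])
        return risk >= RISK_THRESHOLD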
Mr Niklas Jungermann (Kassel University) - Presenting Author
Dr Bettina Langfeldt (Kassel University)
Ms Ulrike Schwabe (DZHW Hannover)
Analyzing the determinants of systematic unit nonresponse, as well as their interplay, is a major challenge in research on survey data quality. In general, the decision to participate in a survey depends on situational influences (such as experiences in prior panel waves) as well as individual dispositions (such as respondents’ generalized attitudes towards surveys). Drawing theoretically on models of rational decision-making, we analyze the effects of prior survey experience and generalized attitudes towards surveys, as well as their interplay, on respondents’ susceptibility to unit nonresponse in an online panel.
Our paper therefore investigates four research questions: (i) Does the experience with the prior panel wave influence the chance of participation in the subsequent wave? (ii) Do generalized attitudes towards surveys substantially contribute to the prediction of unit nonresponse? (iii) Does the effect of survey experience vary depending on how strongly these generalized survey attitudes are internalized (moderating effect)? (iv) Are those effects artifacts of continuous participation in a panel or of systematic dropout prior to the analyzed period? To address the last question, the results of experienced panelists are compared with those of a refreshment sample.
To answer our research questions empirically, we use data from the online version of the GESIS Panel, a bimonthly probability-based panel drawn from the adult German population. Generalized attitudes towards surveys are measured once a year, whereas respondents evaluate their survey experience at the end of each survey.
Our empirical results indicate that (i) situational experience as well as (ii) generalized attitudes towards surveys influence unit nonresponse to varying degrees. Furthermore, (iii) we find evidence for the expected moderating effect of generalized attitudes as a frame. We close with recommendations for survey design, presenting (iv) differing effects for new and experienced panel participants.
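The moderation hypothesis in (iii) can be illustrated with a logistic regression that interacts situational experience with generalized attitudes; the variable and file names below are placeholders, not the GESIS Panel's actual variables.

    import pandas as pd
    import statsmodels.formula.api as smf

    # One row per panelist and wave; variable names are illustrative placeholders.
    panel = pd.read_csv("panel_waves.csv")

    # Main effects test (i) and (ii); the interaction term captures the
    # hypothesized moderating role (iii) of generalized survey attitudes.
    model = smf.logit(
        "unit_nonresponse ~ prior_wave_experience * survey_attitudes",
        data=panel,
    ).fit()
    print(model.summary())

    # For (iv), the same model can be refitted separately for experienced
    # panelists and for the refreshment sample and the coefficients compared.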
Miss Alice Fitzpatrick (Kantar Public) - Presenting Author
In the absence of an interviewer, how can we have confidence that the data are collected from the right individual and are of sufficient quality?
With Address Based Online Surveying (ABOS) methods that invite multiple adults living within a household to take part, there is a risk that one adult completes several questionnaires to obtain a larger incentive. In the absence of an interviewer, data must be verified by other methods.
Assessment of the Community Life Survey suggests that this happens, but at a low level (around 5%). Nevertheless, it is important to have robust prevention measures in place to ensure the integrity of the methodology. We have developed a two-stage approach:
1. Prevention
Respondents should understand the importance of the survey and be clear that data quality is taken seriously. This can be achieved using a declaration screen where respondents essentially ‘sign’ the work as their own.
2. Post-fieldwork verification
This is possible through two means. The first is to re-contact respondents by telephone to check that the named person completed the questionnaire and to confirm a few characteristics. The second form of verification is to use an algorithm to identify poor data quality afterwards.
The first form of validation was trialled on the Community Life Survey, but the low rate of agreement to re-contact typical of self-completion surveys limited the effectiveness of this approach. In addition, as telephone backchecks are expensive, it is only possible to do this for a small subset of respondents.
As a result, we are largely reliant on the algorithm to identify poor-quality data. While there is limited evidence of its efficacy, the algorithm is largely built on a general understanding of measurement error in a self-completion context.
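As an illustration of what such post-fieldwork checks can look like, the sketch below scores each returned questionnaire on simple quality indicators and flags near-identical pairs of returns from the same address; the indicator names and thresholds are assumptions for illustration and do not describe Kantar Public's actual algorithm.

    import pandas as pd
    from itertools import combinations

    def quality_flags(df, min_minutes=5, max_item_nonresponse=0.25, max_straightline=0.8):
        """Flag individual returns showing signs of poor quality (illustrative thresholds)."""
        out = df.copy()
        out["low_quality"] = (
            (out["duration_min"] < min_minutes)
            | (out["item_nonresponse_share"] > max_item_nonresponse)
            | (out["straightline_share"] > max_straightline)
        )
        return out

    def similar_pairs_within_address(df, answer_cols, min_agreement=0.95):
        """Flag pairs of questionnaires from the same address with near-identical
        answers, a possible sign of one adult completing several returns."""
        suspicious = []
        for address, group in df.groupby("address_id"):
            for (i, a), (j, b) in combinations(group.iterrows(), 2):
                agreement = (a[answer_cols] == b[answer_cols]).mean()
                if agreement >= min_agreement:
                    suspicious.append((address, i, j, agreement))
        return pd.DataFrame(suspicious, columns=["address_id", "row_a", "row_b", "agreement"])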
This paper explores the approach to data verification developed within Kantar Public and shares some of the key findings.
Dr Mansour Fahimi (Ipsos) - Presenting Author
Traditionally, response rates have been relied upon as a singular metric for gauging the quality of survey estimates vis-à-vis bias. However, the growing recognition of the Total Survey Error framework highlights that nonresponse is only one of many potential threats to survey quality, perhaps accounting for just a small component of total error (Keeter et al. 2000). Additionally, there are studies suggesting that response rates may have limited power for predicting bias (Groves et al. 2008). Yet, concerned about the legacy stigma attached to low response rates, researchers often look for creative loopholes for reporting higher rates within the guidelines established by AAPOR (2008) or CASRO (1982). This includes improvised alternatives for estimating the proportion of survey assignees who would be finalized as ineligible, hence boosting response rates, despite their undeterminable final dispositions.
Whether or not response rates are telling metrics for survey quality, it is important that their operational definitions and methods of calculation be based on sound and transparent principles. With a growing number of researchers now relying on online panels for sample selection and survey administration, some ten years ago DiSogra and Callegaro (2009) proposed a starting method for computing response rates for “probability-based online panels” that deserves recognition as well as refinement.
As with any new methodology, early conceptualizations of response rates for online surveys need to be examined for inefficacies and considered for refinement. It is from this perspective that this work sets out to provide a robust and coherent definition, and an associated calculation methodology, for response rates when the underlying sample is probability-based and selected from online panels that are not subject to systematic exclusion of any subgroups of the target population. Results are cross-examined across a number of samples for validation and empirical justification.
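For context, one commonly cited formulation for probability-based online panels expresses the cumulative response rate as the product of stage-specific rates (recruitment, profile, retention, and survey completion); the sketch below assumes that formulation with illustrative values and is not the refined methodology proposed in this paper.

    def cumulative_response_rate(recruitment_rate, profile_rate, retention_rate, completion_rate):
        """Cumulative response rate as the product of panel stage-specific rates
        (a common formulation for probability-based online panels)."""
        return recruitment_rate * profile_rate * retention_rate * completion_rate

    # Illustrative component rates only.
    print(cumulative_response_rate(0.30, 0.60, 0.80, 0.50))  # 0.072, i.e. about a 7% cumulative rate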