Exploring New Insights into the Measurement and Reduction of Respondent Burden 2

Session Organisers: Dr Robin Kaplan (Bureau of Labor Statistics), Dr Morgan Earp (Bureau of Labor Statistics)
Time: Tuesday 16th July, 14:00 - 15:30
Room: D04
In government surveys, respondent burden is often thought of in terms of objective measures, such as the length of time it takes to complete a survey and the number of questions. Bradburn (1978) posited that, in addition to these objective measures, burden can be thought of as a multidimensional concept that includes respondents’ subjective perceptions of survey length, how effortful the survey is, and how sensitive or invasive the questions are. The level of burden can also vary depending on the mode of data collection, the survey topic, the demographic group, and the frequency with which individuals or businesses are sampled. Ultimately, respondent burden is concerning because of its potential impact on measurement error, attrition in panel surveys, survey nonresponse, nonresponse bias, and data quality. Thus, both objective and subjective measures of burden may affect survey outcomes, but few studies have examined both types of burden together to better understand the unique contribution of each. This panel aims to explore new and innovative methods of measuring and mitigating both objective and subjective perceptions of respondent burden, while also assessing the impact of respondent burden on survey response and nonresponse bias. We invite submissions that explore all aspects of respondent burden, including:
(1) The relationship between objective and subjective measures of respondent burden
(2) Qualitative research on respondents’ subjective perception of survey burden
(3) The relationship between respondent burden, response propensity, nonresponse bias, response rates, item nonresponse, and other data quality outcomes
(4) Sampling techniques, survey design, use of survey paradata, and other methodologies to help measure and reduce respondent burden
(5) Differences in respondent burden across different survey modes
(6) Measurement of multiple components of respondent burden, including effort, sensitivity, ease or difficulty of answering the questions, interest, and motivation
(7) The use of alternative data sources to reduce burden
(8) Burden involved in data collection efforts, including survey organization contact attempts, reporting burden for establishment surveys, or proxy reporting in household surveys
(9) Measurement of respondent burden in populations that are more frequently surveyed than others
Keywords: Respondent burden, subjective burden, data quality, burden reduction
Dr Annette Jeneson (Regional Center for Child and Adolescent Mental Health, Eastern and Southern Norway) - Presenting Author
Mr Knut-Petter Leinan (Regional Center for Child and Adolescent Mental Health, Eastern and Southern Norway)
Mr Ole-Martin Vangen (Regional Center for Child and Adolescent Mental Health, Eastern and Southern Norway)
Dr Tore Wentzel-Larsen (Regional Center for Child and Adolescent Mental Health, Eastern and Southern Norway)
Background
In long web surveys, data quality may suffer as respondents become tired of the task and fatigue sets in toward later sections of the survey. In our study settings, respondents typically complete web surveys as part of a larger research study about child or youth mental health. Based on relatively high response and completion rates, they appear more dedicated to the task than, for example, respondents answering a general market research survey. There is little data on the effect of respondent fatigue in our study population.
Objective
To investigate the extent to which respondents in study settings like ours show respondent fatigue, as well as their overall experience of answering a 30-minute web survey. Answers to these questions might guide future survey design: for example, is there a point at which respondent fatigue becomes evident, and are longer surveys best divided into two shorter surveys?
Methods
We are conducting a “trial within a trial” in web surveys sent to parents of young children participating in an effect study of a program aimed at increasing the quality of day-care centers in Norway. To tease apart possible effects of the questionnaires themselves versus their order in the survey, respondents are randomly assigned to one of two different questionnaire orders. We measure the effect of order on data quality (time spent per questionnaire, rate of missing data, level of differentiation, break-off rate, and number of items selected from a long word list with multiple possible answers). We also ask participants about their overall experience of answering the survey.
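A minimal sketch of how such order effects on data quality indicators might be tabulated is shown below, assuming a flat respondent-level data file; the file name and column names (questionnaire_order, completed, duration_minutes, item columns q1, q2, ...) are hypothetical and not the study's actual variables.

```python
import pandas as pd

# Hypothetical respondent-level file: one row per respondent, with the randomly
# assigned questionnaire order, a completion flag, total duration, and item
# responses in columns q1, q2, ...
df = pd.read_csv("web_survey_responses.csv")

item_cols = [c for c in df.columns if c.startswith("q")]

# Sample size, break-off rate, and median duration by randomized order
quality = df.groupby("questionnaire_order").agg(
    n=("respondent_id", "size"),
    break_off_rate=("completed", lambda s: 1 - s.mean()),
    median_minutes=("duration_minutes", "median"),
)

# Item nonresponse: average share of missing answers, by randomized order
quality["missing_rate"] = (
    df[item_cols].isna().groupby(df["questionnaire_order"]).mean().mean(axis=1)
)

print(quality)
```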
Results
Recruitment is currently ongoing. Based on the response so far, we expect approximately 2300 completed surveys.
Conclusion
The goal of this experiment, and of related experiments, is to guide web survey practices that yield good data quality as well as good user experiences.
Mr Harry Ganzeboom (VU University Amsterdam) - Presenting Author
Mr Derk Sparendam (VU University Amsterdam)
In Quality & Quantity (47,4), Jansen, Verhoeven, Robert & Dessens (2013) [hereafter: JVRD] contribute an interesting analysis of two competing modes of measuring (personal) income, using an experimental design in the 2003 Hungarian Household Panel in which two modes of measurement were compared (SHORT: a single-shot open question plus a supplementary showcard; LONG: 11 two-step questions about income components). In their analysis, JVRD conclude that the LONG version is to be preferred, in particular because the SHORT question systematically underreports true income. In a reanalysis of the JVRD data, we distinguish between systematic error (invalidity) and random error (unreliability) and show that JVRD overlooked severe reliability problems with the LONG version, which are produced by the filtering structure of the LONG format and the ensuing missing-value problems. As a consequence, the LONG question format underestimates any correlation with income by about 14%. Our recommendation to survey researchers is to use the SHORT format.
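As a purely illustrative aside (not part of the original abstract), the classical psychometric attenuation formula shows why random error (unreliability) shrinks observed correlations:

\[ r_{xy}^{\mathrm{obs}} = r_{xy}^{\mathrm{true}} \sqrt{\rho_{xx'}\,\rho_{yy'}} \]

where \rho_{xx'} and \rho_{yy'} are the reliabilities of the two measures. For example, if an income measure had a reliability of about 0.74 while the other variable were error-free, observed correlations would be attenuated by a factor of \sqrt{0.74} \approx 0.86, i.e. by roughly 14%; this is an illustrative calculation, not the authors' estimate.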
Dr James Dahlhamer (National Center for Health Statistics) - Presenting Author
Dr Aaron Maitland (National Center for Health Statistics)
Dr Benjamin Zablotsky (National Center for Health Statistics)
The U.S. National Health Interview Survey (NHIS) has undergone a questionnaire redesign, with a new survey instrument slated for full release in January of 2019. Goals of the redesign include improving the measurement of covered health topics, harmonizing overlapping content with other federal health surveys, and reducing respondent burden by shortening the questionnaire. The latter is of particular importance, as NHIS interviewers have long complained about the length of the interview and the obstacles it presents in securing participation. However, while reducing the length of the interview is an important step, research has shown that burden is a multidimensional, subjective phenomenon, “the product of an interaction between the nature of the task and the way it is perceived by the respondent” (Bradburn, 1978: 36). Length of interview alone may be a poor proxy for respondent burden.
In quarter four of 2018, the 2018 and redesigned instruments were fielded simultaneously, with sample addresses randomly assigned to one of the two instruments. To assess the impact of the redesigned instrument on respondent burden, three questions were asked at the end of each instrument after a fully complete interview. The first question asked for the respondent’s assessment of how burdensome they found the interview, while the second and third questions captured respondent perceptions of survey content and task (ease or difficulty in answering questions, sensitivity of questions). Based on these questions, we compare respondent assessments of burden across the two instruments overall and for select subgroups. In multivariate analysis, we assess the impact of instrument version on respondent burden controlling for measures tapping constructs outlined by Fricker et al. (2014): motivation, recruitment effort, task difficulty, perception of survey content, and perception of survey task. We discuss the implications of our findings in the context of redesigning a large-scale government survey.
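The abstract does not specify the modeling approach; as an illustrative sketch only, a regression of the burden rating on instrument version with controls for the Fricker et al. (2014) constructs might look as follows, with a hypothetical file name and variable names.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical respondent-level analysis file; variable names are
# illustrative assumptions, not NHIS production variables.
df = pd.read_csv("nhis_q4_2018_burden.csv")

# Perceived burden (e.g., a rating scale) regressed on instrument version,
# controlling for constructs in the spirit of Fricker et al. (2014).
model = smf.ols(
    "burden_rating ~ C(instrument) + motivation + recruitment_effort"
    " + task_difficulty + content_perception + task_perception",
    data=df,
).fit()
print(model.summary())
```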
Mrs Silke Martin (GESIS Leibniz Institute for the Social Sciences) - Presenting Author
Dr Clemens Lechner (GESIS Leibniz Institute for the Social Sciences)
Professor Corinna Kleinert (Leibniz Institute for Educational Trajectories)
Professor Beatrice Rammstedt (GESIS Leibniz Institute for the Social Sciences)
Interview situations can be perceived as burdensome by some respondents. The burden can be caused by the large number of questions or by difficulties in answering them. There is, however, no established objective concept for operationalizing respondent burden; in the absence of other measures, the interview duration, the number of questions, or the degree of item nonresponse is often used. In longitudinal surveys, the burden perceived in the previous wave may contribute to the decision for or against further participation in the follow-up wave. Those who have enjoyed the interview and experienced positive feelings may be more willing to continue in the panel. Others who have experienced frustration, embarrassment, or cognitive overload are, in contrast, at risk of refusing further participation.
In our study, we used data from two large-scale longitudinal assessment surveys in Germany, ALWA/NEPS and PIAAC/PIAAC-L, which addressed similar target populations, used similar survey instruments, and had comparable designs. Both surveys included the administration of a time-consuming assessment in which respondents had to work on a number of cognitive tasks of varying degrees of difficulty. We assume that respondents with low cognitive skills were more likely to perceive the test situation as burdensome and may have experienced negative feelings, such as frustration, lack of motivation, or embarrassment. We therefore hypothesize that respondents with lower cognitive skills are more likely than those with higher cognitive skills to refuse participation in the follow-up wave in order to avoid a comparable situation in the future. The results of both surveys provide evidence for our hypothesis, even after controlling for a number of other factors that previous research has identified as predictors of nonresponse (especially education).
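The abstract does not detail the model specification; as an illustrative sketch under the assumption of a respondent-level file with hypothetical variable names, a logistic regression of follow-up refusal on cognitive skills, controlling for education and other covariates, could be set up as follows.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical merged panel file; variable names are illustrative assumptions.
df = pd.read_csv("panel_followup.csv")

# Probability of refusing the follow-up wave as a function of cognitive skills,
# controlling for education and other predictors of nonresponse.
model = smf.logit(
    "refused_followup ~ cognitive_score + C(education) + age + female",
    data=df,
).fit()
print(model.summary())
```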