All time references are in CEST
Innovations in the conceptualization, measurement, and reduction of respondent burden 1
Session Organisers | Dr Douglas Williams (U.S. Bureau of Labor Statistics); Dr Robin Kaplan (U.S. Bureau of Labor Statistics) |
Time | Thursday 20 July, 14:00 - 15:30 |
Room | U6-20 |
In an era of declining response rates, increasing use of multiple survey modes, and difficulties retaining respondents across multiple survey waves, the question of how to better understand, measure, and reduce respondent burden is crucial. In official statistics, respondent burden is often conceptualized in terms of objective measures, such as the length of time it takes to complete a survey and the number of questions asked. Bradburn (1978) posited that in addition to these objective measures, burden can be thought of as a multidimensional concept that includes respondents’ subjective perceptions of how effortful the survey is, how sensitive or invasive the questions are, and how long the survey is. The level of burden can also vary by the mode of data collection, survey characteristics, demographic and household characteristics of respondents, and the frequency with which individuals or businesses are sampled. Ultimately, respondent burden is concerning because of its potential to increase measurement error, attrition in panel surveys, survey nonresponse, and nonresponse bias, and to degrade data quality more broadly. Building on the recent Journal of Official Statistics Special Issue on Respondent Burden, we invite papers on new and innovative methods of measuring both objective and subjective perceptions of respondent burden, as well as on methods for assessing and mitigating the impact of respondent burden on survey response and nonresponse bias. We welcome submissions that explore the following topics:
• The relationship between objective and subjective measures of respondent burden
• Strategies to assess or mitigate the impact of respondent burden
• Quantitative or qualitative research on respondents’ subjective perceptions of survey burden
• The relationship between respondent burden, response propensity, nonresponse bias, response rates, item nonresponse, and other data quality measures
• Sampling techniques, survey design, use of survey paradata, and other methodologies to help measure and reduce respondent burden
• Differences in respondent burden across different survey modes
Keywords: Respondent burden, data quality, item nonresponse
Dr Andy Peytchev (RTI) - Presenting Author
Dr Emilia Peytcheva (RTI)
Dr David Wilson (RTI)
Mr Darryl Creel (RTI)
Mr Darryl Cooney (RTI)
Mr Jeremy Porter (RTI)
There are substantial reasons to reduce the length of self-administered surveys, including minimizing nonresponse, limiting breakoffs, and, perhaps most importantly, reducing measurement error in the collected data (Peytchev and Peytcheva, 2017). Split Questionnaire Design (SQD) (Raghunathan and Grizzle, 1995) gives survey designers an option to achieve a complete dataset with all variables without asking every question of each respondent. In SQD, multiple splits of the questionnaire are created in a manner that allows all possible combinations of variables to be observed for at least part of the sample. The data for the questions omitted for each respondent are imputed. To propagate the uncertainty associated with each imputed value, multiple imputation is employed.
Among the obstacles to full-scale implementation in large surveys are the need to develop and compare alternative approaches to implementing SQD and the need to evaluate them on real survey data. Two steps are critical to SQD's performance: creation of the splits and imputation of the omitted data. The project first developed two sets of questionnaire splits: (1) based on cognitive aspects of questionnaire design, and (2) balancing those cognitive considerations against the need to maximize correlations across modules to aid imputation. Then, data were deleted for randomly assigned groups and imputed for each set of questionnaire splits using two fundamentally different imputation approaches: (1) regression-based multiple imputation, and (2) weighted sequential hot deck multiple imputation. This 2x2 design was evaluated on data from the 2019 National Survey of College Graduates in the United States. The evaluation criteria include bias and variance for a variety of estimates, comparing the approaches to creating the splits and the imputation methods. We present the design, main challenges, and key results from this two-year study.
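The general SQD workflow described above can be sketched in a few lines of code. The example below is a minimal illustration only, assuming simulated data, a three-module split scheme, and scikit-learn's IterativeImputer as a stand-in for regression-based multiple imputation; it does not reflect the authors' actual splits, the weighted sequential hot deck approach, or the National Survey of College Graduates data.

```python
# Minimal SQD sketch on simulated data (illustrative assumptions throughout).
import numpy as np
import pandas as pd
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(42)

# Simulated "complete" data: three question modules (A, B, C), six items total.
n = 500
full = pd.DataFrame(
    rng.normal(size=(n, 6)),
    columns=["A1", "A2", "B1", "B2", "C1", "C2"],
)
modules = {"A": ["A1", "A2"], "B": ["B1", "B2"], "C": ["C1", "C2"]}

# Each split administers two of the three modules, so every pair of modules
# is observed jointly for part of the sample (a core SQD requirement).
splits = [("A", "B"), ("A", "C"), ("B", "C")]
assignment = rng.integers(0, len(splits), size=n)

observed = full.copy()
for i, administered in enumerate(splits):
    omitted = [m for m in modules if m not in administered]
    for m in omitted:
        observed.loc[assignment == i, modules[m]] = np.nan

# Multiple imputation of the omitted blocks: repeated stochastic regression
# imputations with different seeds to propagate imputation uncertainty.
M = 5
completed = [
    pd.DataFrame(
        IterativeImputer(sample_posterior=True, random_state=m).fit_transform(observed),
        columns=observed.columns,
    )
    for m in range(M)
]

# Combine point estimates across the M completed datasets (Rubin's rules
# would also combine the variances; only the point estimate is shown here).
est = np.mean([d["C1"].mean() for d in completed])
print(f"MI estimate of mean(C1): {est:.3f}  vs. complete-data {full['C1'].mean():.3f}")
```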
Dr Ting Yan (Westat) - Presenting Author
Mr Douglas Williams (Bureau of Labor Statistics)
Concerns about the burden that surveys place on respondents have a long history in the survey field. This article reviews existing conceptualizations and measurements of response burden in the survey literature. Instead of conceptualizing response burden as a one-time overall outcome, we expand the conceptual framework of response burden by positing response burden as a continuous evaluation of the requirements imposed on respondents throughout the survey process. We specifically distinguish response burden at three time points: initial burden at the time of the survey request, cumulative burden that respondents experience after starting the interview, and continuous burden for those asked to participate in a later round of interviews in a longitudinal setting. At each time point, survey and question features affect response burden. In addition, respondent characteristics can affect response burden directly, or they can moderate or mediate the relationship between survey and question characteristics and the end perception of burden. Our conceptual framework reflects the dynamic and complex interactive nature of response burden at different time points over the course of a survey. We show how this framework can be used to explain conflicting empirical findings and guide methodological research.
Dr Christopher Antoun (University of Maryland) - Presenting Author
Ms Xin (Rosalynn) Yang (University of Maryland)
Dr Brady West (University of Michigan)
Dr Jennifer Sinibaldi (Pennsylvania State University)
While most surveys prompt respondents to complete the entire questionnaire in one sitting, there may be potential benefits to dividing surveys into shorter parts (or modules) that a respondent can complete at different points in time, at their convenience. However, existing research has not compared different modular design techniques or examined how they can be implemented via smartphones. To address these gaps, we first developed an Apple iOS smartphone app ("Smartphone Surveys") that can deploy modular surveys and then conducted an experiment comparing different modular and non-modular formats to a conventional web survey. In total, 664 people, recruited from a previous National Center for Science and Engineering Statistics survey and the Forthright online volunteer panel, were randomly assigned to answer 65 questions about employment and the economy (divided into 7 modules) using one of four methods: (1) modular, with all modules available at once via the app; (2) modular, with modules time-released (one module every other day) via the app; (3) non-modular, with all questions administered at once via the app; and (4) non-modular, using a standard web survey (the control group). We compare the effects of these approaches on perceived burden as well as on several indicators of response quality (missing data, straightlining, lengths of answers to open questions, and rounded answers). Although preliminary results indicate few differences between the modular and non-modular app-based approaches, we find some important differences between the app-based approaches and the web survey. For example, the app-based approaches yielded higher quality data on two metrics (less straightlining, longer responses to open questions) than the web survey. In addition, respondents rated the app-based approaches as easier than the web survey, and this pattern holds in multivariable models adjusting for demographic variables.
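Response-quality indicators of the kind compared here (missing data, straightlining, open-answer length, rounded numeric answers) can be computed with simple summaries of the response data. The sketch below is illustrative only; all column names, item groupings, and thresholds are assumptions rather than the study's actual coding.

```python
# Hypothetical per-survey data-quality summaries (all names are assumptions).
import pandas as pd

def quality_indicators(df: pd.DataFrame, grid_items: list,
                       open_item: str, numeric_item: str) -> pd.Series:
    """Return simple response-quality summaries for one survey condition."""
    grid = df[grid_items]
    return pd.Series({
        # Share of all answers left missing.
        "item_missing_rate": df.isna().mean().mean(),
        # Straightlining: identical answers across every item in a grid.
        "straightlining_rate": grid.nunique(axis=1).eq(1).mean(),
        # Mean character length of the open-ended answer.
        "mean_open_length": df[open_item].fillna("").str.len().mean(),
        # Rounding: numeric answers that are multiples of 5.
        "rounded_rate": df[numeric_item].dropna().mod(5).eq(0).mean(),
    })

# Toy example with two respondents.
toy = pd.DataFrame({
    "q1": [3, 2], "q2": [3, 4], "q3": [3, 1],
    "open": ["It was fine.", "Prices rose sharply this year."],
    "hours": [40, 37],
})
print(quality_indicators(toy, ["q1", "q2", "q3"], "open", "hours"))
```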
Dr Philip Brenner (Utrecht University) - Presenting Author
Dr Lee Hargraves (University of Massachusetts Boston)
Ms Carol Cosenza (University of Massachusetts Boston)
The high demand for cost-effective survey designs has been an impetus for methodological and technological innovation in online and mobile surveys. Yet taking advantage of these innovations may carry an unintended consequence: high respondent burden. Long and complex self-administered surveys, such as those conducted on the Web or via SMS, may cause fatigue and breakoffs that harm data quality. We therefore test a planned missing design, in which respondents are randomly assigned to answer only a subset of questions so as to shorten the survey, as a way to reduce respondent burden in Web and SMS administrations of the CAHPS Clinician & Group Survey (CG-CAHPS), a survey of patient experiences widely used by health care providers. Members of an online nonprobability panel were randomly assigned to one of three invitation and data collection mode protocols: email invitation to a Web survey, SMS invitation to a Web survey, or SMS invitation to an SMS survey. Within these three mode protocols, respondents were randomly assigned either to a planned missing design, which shortened the survey by about 40 percent, or to a control group that received the survey in its entirety. We compare survey duration, breakoff and completion rates, and five key patient experience measures across conditions to assess the effect of the planned missing design in each of the three modes. We found that the planned missing design worked well in our Web survey, reducing survey duration and breakoff without changing estimates relative to the full-survey control condition. However, mixed findings in the SMS survey suggest that even shortened, 15-item surveys may be too long to substantially reduce respondent burden. We conclude with recommendations for future research.
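As a rough illustration of the 3 (mode protocol) x 2 (planned missing vs. full questionnaire) random assignment described above, the sketch below allocates a hypothetical panel to conditions and draws a random item subset for the planned missing group. The item count, the subset mechanics, and all names are assumptions for demonstration, not the CG-CAHPS instrument or the study's procedure.

```python
# Hypothetical 3 x 2 factorial assignment with a planned missing item subset.
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
panel = pd.DataFrame({"panel_id": range(1200)})

# Factor 1: invitation and data collection mode protocol.
modes = ["email->web", "sms->web", "sms->sms"]
panel["mode_protocol"] = rng.choice(modes, size=len(panel))

# Factor 2: planned missing design vs. full-survey control.
panel["planned_missing"] = rng.choice([True, False], size=len(panel))

# Assumed 25-item instrument; the planned missing group answers a random
# ~60% subset (about 15 items), the control answers everything.
items = [f"item_{k:02d}" for k in range(1, 26)]
n_keep = int(round(len(items) * 0.6))
panel["items_assigned"] = [
    sorted(rng.choice(items, size=n_keep, replace=False)) if pm else items
    for pm in panel["planned_missing"]
]

print(panel.groupby(["mode_protocol", "planned_missing"]).size())
```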
Mr Douglas Williams (U.S. Bureau of Labor Statistics) - Presenting Author
Mrs Sharon Stang (U.S. Bureau of Labor Statistics)
Ms Faith Ulrich (U.S. Bureau of Labor Statistics)
Questionnaire length is an often-used metric of survey burden. However, the relationship between survey participation and questionnaire length is generally weak, because other factors mediate how burden is perceived, such as survey interest, sponsor, and topic, or intrinsic factors like respondent motivation. Additionally, survey researchers go to great lengths to minimize burden, and sampled members decide whether to participate before they have any experience with the survey. Even so, questionnaire length can affect respondent effort later in the questionnaire, resulting in satisficing, item nonresponse, or survey breakoff. In this paper we examine the effect of increasing questionnaire length in an establishment survey on survey outcomes, including unit response, item nonresponse, and data quality. We explore this in the Business Response Survey (BRS), conducted by the U.S. Bureau of Labor Statistics. The BRS was designed as a supplemental survey administered online to provide quick measurement of emerging issues affecting businesses. The survey has been administered yearly since 2020, with the number and complexity of survey questions increasing each year. Complexity increased with the inclusion of questions that require calculations or access to business records. Despite a nearly three-fold increase in the number of survey questions, the survey has remained relatively short, and response has held steady from year to year at about 25 percent. We expand on these findings by reporting response distributions by contact number, establishment factors (e.g., size, industry), and survey length to examine any effects on data quality or breakoffs. This paper adds to the debate on how burden manifests as survey length and complexity increase.