
ESRA 2019 full program




Exploring New Insights into the Measurement and Reduction of Respondent Burden 3

Session Organisers Dr Robin Kaplan (Bureau of Labor Statistics)
Dr Morgan Earp (Bureau of Labor Statistics)
Time: Wednesday 17th July, 09:00 - 10:30
Room D04

In government surveys, respondent burden is often thought of in terms of objective measures, such as the length of time it takes to complete a survey and the number of questions. Bradburn (1978) posited that in addition to these objective measures, burden can be thought of as a multidimensional concept that includes respondents’ subjective perceptions of survey length, how effortful the survey is, and how sensitive or invasive the questions are. The level of burden can also vary depending on the mode of data collection, survey topic, demographic group, and frequency with which individuals or businesses are sampled. Ultimately, respondent burden is concerning because of its potential impact on measurement error, attrition in panel surveys, survey nonresponse, nonresponse bias, and data quality. Thus, both objective and subjective measures of burden may affect survey outcomes, but few studies have explored both types of burden together to better understand the unique contribution each may have. This panel aims to explore new and innovative methods of measuring and mitigating both objective and subjective perceptions of respondent burden, while also assessing the impact of respondent burden on survey response and nonresponse bias. We invite submissions that explore all aspects of respondent burden, including:
(1) The relationship between objective and subjective measures of respondent burden
(2) Qualitative research on respondents’ subjective perception of survey burden
(3) The relationship between respondent burden, response propensity, nonresponse bias, response rates, item non-response, and other data quality outcomes
(4) Sampling techniques, survey design, use of survey paradata, and other methodologies to help measure and reduce respondent burden
(5) Differences in respondent burden across different survey modes
(6) Measurement of multiple components of respondent burden, including effort, sensitivity, how easy or difficult the questions are to answer, interest, or motivation
(7) The use of alternative data sources to reduce burden
(8) Burden involved in data collection efforts, including survey organization contact attempts, reporting burden for establishment surveys, or proxy reporting in household surveys
(9) Measurement of respondent burden in populations that are more frequently surveyed than others

Keywords: Respondent burden, subjective burden, data quality, burden reduction

Using Feedback from Household Respondents to Identify Potential Sources of Subjective Burden in Household Surveys

Ms Jessica Holzberg (U.S. Census Bureau) - Presenting Author
Mr Jonathan Katz (U.S. Census Bureau)
Ms Mary Davis (U.S. Census Bureau)

In 1978, Norman Bradburn defined "respondent burden" as a multi-faceted concept that is "the product of an interaction between the nature of the task [objective burden] and the way it is perceived by the respondent [subjective burden]." In the years since, burden has in practice been defined in a variety of ways, with objective measures of burden receiving considerably more attention than subjective measures. In response to this gap, recent efforts to measure subjective perceptions of burden, alone and in combination with objective measures, have increased. As part of these efforts, researchers have conducted qualitative research with respondents to learn more about their perceptions of survey burden. However, the majority of this research has been conducted with business survey respondents. While some findings may also apply to household surveys, there are numerous differences between business and household surveys that may affect perceptions of burden.
In this paper, we provide an overview of a multi-phase project conducted to learn about respondents’ perceptions of the burden of the American Community Survey (ACS), the premier source for detailed population and housing information in the United States, and to get feedback on survey questions designed to measure those perceptions. We conducted 10 focus groups and 62 cognitive interviews, segmented by response mode (self-response versus response with an interviewer) and including respondents with diverse demographic and household characteristics. We describe findings on how burdensome respondents found the ACS, as well as the survey features and respondent characteristics that contributed to those perceptions. We conclude with suggestions for researchers interested in designing measures of subjective burden perceptions.


What Does it Mean to be Burdened?: Exploring Subjective Perceptions of Burden

Dr Robin Kaplan (Bureau of Labor Statistics) - Presenting Author

Respondent burden in U.S. government surveys is often defined by objective measures such as the estimated time to complete a survey. Respondents’ subjective perceptions of the survey task (e.g., effort, sensitivity, and interest in the survey topic) have received less attention. Additionally, other aspects of the survey process that respondents experience, including contact attempts from the survey agency and answering as a proxy for one’s household, have been relatively unexplored as features of burden that could affect response rates and data quality. We conducted exploratory research to examine these questions, including subjective perceptions of survey burden, which survey features contribute to burden, and the impact of proxy reporting on the level of burden. We conducted an online survey (n=171) in which participants answered questions about employment and demographics typically included on U.S. government surveys, answering for themselves and as a proxy for one other randomly selected household member. Half of the participants were led to believe the survey would be highly burdensome (8 waves of data collection), and half were told it would be low burden (a one-time survey). Afterward, participants answered open-ended questions about their experience and rated the level of burden, sensitivity, and difficulty they felt while completing the survey. Results showed that participants found it more burdensome and difficult to answer for other household members, but more sensitive to respond for themselves. In terms of proxy response, participants found it more sensitive to respond for a grandchild or parent, and more difficult to answer for other relatives, non-relatives, or siblings. When describing their subjective perceptions of burden, participants characterized the survey as “easy, good, and normal,” indicating that the term “burden” may not accurately reflect their survey experience. We discuss the implications of this research for furthering our understanding of respondent burden and how to measure it.


Perceived and Actual Respondent Burden and the Effects on Data Quality in Web Surveys

Ms Tanja Kunz (GESIS - Leibniz-Institute for the Social Sciences) - Presenting Author
Mr Tobias Gummer (GESIS - Leibniz-Institute for the Social Sciences)

Questionnaire length has been identified as a key factor affecting data quality. A larger number of questions and the associated respondent burden are deemed to lower respondents’ motivation to thoroughly process the questions. Thus, respondent burden, which increases with each additional question respondents have to work through, is likely to lower data quality. However, little is known so far about the relationship between actual and perceived respondent burden, how this relationship may change over the course of questionnaire completion, and how data quality is affected depending on the relative position of a question within the questionnaire. To address these questions, a web survey is conducted among respondents of an online access panel using a questionnaire of 25 to 29 minutes in length. The question order is fully randomized, allowing the effects of question position on data quality to be disentangled from the effects of the content, format, and difficulty of individual questions. Among these randomly ordered survey questions, a block of evaluation questions measuring respondents’ perceived burden, engagement, and effort is asked several times. Owing to the complete randomization of the survey questions and the repeated evaluation questions, changes in actual and perceived respondent burden over the course of questionnaire completion, and their effects on data quality, can be examined systematically. Several indicators of data quality are taken into account, among others don’t know responses, nondifferentiation, attention check failure, and the length of answers to open-ended questions. In addition, paradata are used as proxy measures of respondent burden. This study provides evidence on how perceived respondent burden develops over the course of the questionnaire and how it is related to the level of actual respondent burden. In this respect, the present study contributes to a better understanding of previous evidence of lower data quality in later parts of questionnaires.


Exploring Ways to Accurately Estimate Burden, Based on the Respondents' Perceived Survey Length

Miss Anna Hamelin (U.S. Energy Information Administration) - Presenting Author

Accurately estimating burden per response on establishment surveys is a difficult task. According to the Paperwork Reduction Act regulations in 5 CFR 1320(b)(1), nine components of burden need to be evaluated when estimating reporting burden. Some of these components are easily understood, while others are no longer applicable or are confusingly similar to one another. Respondents are able to provide quantifiable responses to some overarching components, while other discrete sub-components are not easily measurable and result in respondent confusion or errors in burden estimates. The nine components that OMB lists provide one framework for measuring burden; there are other approaches to consider when estimating the burden per response on an establishment survey.
Advances in technology have affected the way respondents view burden. For some respondents, technology has reduced response burden; for others, reporting burden has increased because they must extract information from multiple databases.
Another issue that arises when asking companies to estimate the time it takes to complete a survey is their willingness to participate. Companies that do not want to participate, or that want to provide less data, tend to report extremely large burden estimates. When probing on burden estimates in cognitive research, it is important to recognize when a company is providing inaccurate or inflated burden estimates. Respondents’ perceptions of survey length increase with more sensitive questions. They also increase when there are issues with data quality and respondents receive follow-up phone calls to check whether their data are correct.
This paper will cover results from multiple cognitive tests that asked participants to estimate their burden per response to energy surveys, along with lessons learned from each project. It will conclude with recommendations on how to improve estimates of burden per response in an electronic data collection environment.