Understanding Nonrespondents to Inform Survey Protocol Design 2
|
Session Organisers |
Ms Brenda Schafer (Internal Revenue Service)
Dr Scott Leary (Internal Revenue Service)
Mr Rizwan Javaid (Internal Revenue Service)
Mr Pat Langetieg (Internal Revenue Service)
Dr Jocelyn Newsome (Westat)
Time: Wednesday 17th July, 14:00 - 15:00
Room: D20
Government-sponsored household surveys continue to face historically low response rates (e.g., Groves, 2011; de Leeuw & de Heer, 2002). Although an increase in nonresponse does not necessarily result in an increase in nonresponse bias, higher response rates can help reduce average nonresponse bias (Brick & Tourangeau, 2017).
One method to address nonresponse is to maximize response, but this approach often comes at a high cost and has had mixed success (Tourangeau & Plewes, 2013; Stoop et al., 2010). A variety of intervention strategies have been used, including: offering surveys in multiple survey modes; limiting survey length; offering incentives; making multiple, distinct communication attempts; and targeting messaging to the intended audience (e.g., Dillman et al., 2014; Tourangeau, Brick, Lohr, & Li, 2017). A second method to address nonresponse involves imputations and adjustments after data collection is complete (Kalton & Flores-Cervantes, 2003). However, the effectiveness of this approach largely depends on which auxiliary variables are used in the nonresponse adjustment models.
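As a purely illustrative sketch (not part of the session description), a simple weighting-class nonresponse adjustment using one auxiliary frame variable might look like the following; the variable names and data are hypothetical.

```python
import pandas as pd

# Hypothetical sample file: one row per sampled case, with an auxiliary
# frame variable ("region") known for respondents and nonrespondents alike.
sample = pd.DataFrame({
    "base_weight": [150.0, 150.0, 200.0, 200.0, 120.0, 120.0],
    "region":      ["urban", "urban", "rural", "rural", "urban", "rural"],
    "responded":   [1, 0, 1, 1, 0, 1],
})

# Weighting-class adjustment: within each class defined by the auxiliary
# variable, inflate respondents' weights so they also represent the
# nonrespondents in that class.
class_totals = sample.groupby("region")["base_weight"].sum()
resp_totals = (
    sample[sample["responded"] == 1].groupby("region")["base_weight"].sum()
)
adjustment = class_totals / resp_totals

sample["nr_adjusted_weight"] = sample.apply(
    lambda row: row["base_weight"] * adjustment[row["region"]]
    if row["responded"] == 1 else 0.0,
    axis=1,
)
print(sample)
```

The sketch shows why the choice of auxiliary variable matters: the adjustment can only correct for nonresponse along dimensions that the frame variable actually captures.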
Although research has been done to understand nonresponse in surveys, there are still many unanswered questions, such as: What demographic characteristics distinguish nonrespondents from respondents? What socio-economic or other barriers may be contributing to a low response rate? Answering these and similar questions may allow us to tailor survey design and administration protocols to overcome specific barriers that lead to nonresponse. Reducing nonresponse may mean fewer adjustments and imputations after data collection.
This session will focus on understanding characteristics of nonrespondents, barriers to survey response, and how knowledge about nonrespondents can guide survey design protocols. Researchers are invited to submit papers, experiments, pilots, and other approaches on any of the following topics:
• Better understanding how nonrespondents differ from respondents.
• Understanding barriers to response for different subgroups.
• Understanding how nonresponse for different subgroups may have changed over time.
• Using knowledge about nonrespondents to design focused intervention strategies. These could include, but are not limited to, tailored messaging, tailored modes of administration, distinct forms of follow-up, and shortened surveys.
• Designing survey protocols to increase response from hard-to-reach populations of interest.
Keywords: Nonresponse, Survey Protocol, Survey Design, Behavioural Insights
Ms Franziska Marcheselli (NatCen Social Research) - Presenting Author
Mrs Katharine Sadler (NatCen Social Research)
Mrs Dhriti Mandalia (NatCen Social Research)
Traditionally, surveys have used postal questionnaires to collect information from teachers in Great Britain. In recent years, the use of online data collection methods has become more widespread. These methods are seen as a way of collecting data more quickly and cheaply than postal methods.
The 2017 Mental Health of Children and Young People (MHCYP) survey collected information about 2- to 19-year-olds living in England. As part of the survey, information was collected from young people, their parents and, for children aged 5-16 years old, from their teachers as well. Teachers were asked to complete a self-completion questionnaire that provided crucial information needed to help make a clinical assessment of the child’s mental health. The 2017 MHCYP survey dress rehearsal trialled a web-only questionnaire, which teachers accessed via an email. However, the adoption of a web-only data collection strategy posed several challenges and the dress rehearsal achieved a very low response rate. As a result, the data collection strategy was changed for the main stage survey, with a mixed-mode web and paper design being used, in which teachers were sent both an email and a letter with a paper questionnaire. This design also posed challenges.
This paper discusses these challenges, particularly in relation to making contact and encouraging participation in the mixed-mode survey, the strategies used to address them, and their effectiveness. Findings from an analysis of paradata (a tabulation of the kind sketched after the list below) are used to assess:
• the impact of the initial email, reminder emails and the mailing of paper questionnaires on response rates;
• the impact of length of fieldwork at each wave on response rates to the different mailings;
• the best time of day and day of the week to send the web survey invitation.
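As a hypothetical sketch only (the field names and data below are invented, not taken from the MHCYP paradata), such a tabulation of contact-attempt paradata might look like this:

```python
import pandas as pd

# Hypothetical paradata: one row per teacher contact attempt, with the
# contact type, send timestamp, and whether a completed questionnaire
# was eventually returned for that case.
paradata = pd.DataFrame({
    "case_id":      [1, 1, 2, 2, 3, 4],
    "contact_type": ["initial_email", "reminder_email",
                     "initial_email", "paper_mailing",
                     "initial_email", "initial_email"],
    "sent_at":      pd.to_datetime([
        "2017-09-04 09:00", "2017-09-11 16:00",
        "2017-09-04 09:00", "2017-09-18 10:00",
        "2017-09-05 14:00", "2017-09-06 11:00"]),
    "responded":    [1, 1, 1, 1, 0, 1],
})

# Response rate by the type of contact that was fielded.
print(paradata.groupby("contact_type")["responded"].mean())

# Response rate by day of week and hour at which the invitation was sent.
invites = paradata[paradata["contact_type"] == "initial_email"]
print(
    invites.groupby([invites["sent_at"].dt.day_name(),
                     invites["sent_at"].dt.hour])["responded"].mean()
)
```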
Dr Jens Ambrasat (German Centre for Higher Education and Science Studies)
Ms Almuth Liletz (German Centre for Higher Education and Science Studies)
Mr Uwe Russ (German Centre for Higher Education and Science Studies) - Presenting Author
How a survey is perceived and evaluated by respondents is important, not least for their motivation to participate in future surveys. We ask what determines this survey evaluation and which tools could improve it. Prior research suggests that content and questionnaire length should have the strongest impact. In our paper, we examine the relationship of questionnaire length and content to survey evaluations by looking more closely at the moderating effects of respondents' general survey attitudes and of a promised incentive.
We shed light on these interaction effects using recent data from 1,600 doctoral candidates in a web survey covering 26 German universities. An embedded experiment randomly assigned participants to a long (55-minute) version or one of two short (35-minute) versions of the questionnaire. The short versions were constructed using a split questionnaire design (SQD), so their content varies to some extent. We measured survey evaluation with a multidimensional scale from the GESIS Panel and general survey attitudes with a nine-item scale capturing enjoyment, value, and burden.
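As an illustration only, and not the authors' actual implementation, the random assignment to a long version or one of two SQD short versions could be sketched as follows (the module names are invented):

```python
import random

# Hypothetical questionnaire modules; the core block is asked of everyone,
# while each SQD split drops one of the rotating blocks.
CORE = ["core"]
VERSIONS = {
    "long":    CORE + ["module_a", "module_b"],   # ~55 minutes
    "short_1": CORE + ["module_a"],               # ~35 minutes
    "short_2": CORE + ["module_b"],               # ~35 minutes
}

def assign_version(rng: random.Random) -> str:
    """Randomly assign a respondent to one of the three questionnaire versions."""
    return rng.choice(list(VERSIONS))

rng = random.Random(20190717)  # fixed seed so the assignment is reproducible
assignments = {f"respondent_{i}": assign_version(rng) for i in range(5)}
print(assignments)
```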
Results reveal that general survey attitudes have the greatest impact on survey evaluation, dominating the effects of questionnaire length as well as of the promised incentive. Remarkably, the longer version of the questionnaire is not rated worse than the shorter ones, and is even rated better than one of them. We discuss these results in light of a trade-off between content and questionnaire length.
Dr Rebecca Medway (American Institutes for Research) - Presenting Author
Ms Mahi Megra (American Institutes for Research)
Mr Michael Jackson (American Institutes for Research)
Mrs Cameron McPhee (American Institutes for Research)
Like many other design features, sample members’ preferred response mode is not one-size-fits-all (Olson et al., 2012; Smyth et al., 2014). The lower screener response rate achieved for the “web-push” condition of the 2016 National Household Education Survey (NHES), as compared to a paper-only condition, suggests there are NHES sample members who would respond to a paper-only survey but not to a web-push one that initially offers web and only later offers paper. This raises two questions: Can we identify characteristics of sample members who are more inclined to respond by paper than by web? Can we then use those characteristics to tailor which modes are offered in future administrations?
To address these questions, we used response outcomes from NHES:2016, along with auxiliary data available on the frame, to develop a model of “response mode sensitivity”, calculate a paper sensitivity score for each case (the extent to which a case prefers responding by paper over web), and confirm that there is a subgroup of cases disproportionately likely to prefer paper. This model was then used to assign sensitivity scores to a random subset of the NHES:2019 sample that was part of a “modeled mode” condition. Within this condition, the cases with the highest sensitivity scores were designated to receive a paper-only protocol, with the remainder receiving web-push.
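As an illustration only (the abstract does not specify the authors' model), a paper sensitivity score of this kind could be sketched roughly as follows, using invented frame variables and simulated data:

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical frame (auxiliary) data for prior-round cases, plus an
# indicator of whether the case responded by paper rather than web.
rng = np.random.default_rng(0)
frame = pd.DataFrame({
    "age_of_head":   rng.integers(20, 85, 1000),
    "urban":         rng.integers(0, 2, 1000),
    "internet_flag": rng.integers(0, 2, 1000),
})
# Invented relationship, purely so the example is runnable.
logit = 0.03 * (frame["age_of_head"] - 50) - 1.2 * frame["internet_flag"]
frame["responded_by_paper"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Fit a "paper sensitivity" model on the prior round ...
X = frame[["age_of_head", "urban", "internet_flag"]]
model = LogisticRegression().fit(X, frame["responded_by_paper"])

# ... then score a new sample and designate the most paper-sensitive cases
# for a paper-only protocol, with everyone else receiving web-push.
new_sample = frame.sample(200, random_state=1)[X.columns]
scores = model.predict_proba(new_sample)[:, 1]
cutoff = np.quantile(scores, 0.8)          # e.g. top 20% of scores
protocol = np.where(scores >= cutoff, "paper_only", "web_push")
print(pd.Series(protocol).value_counts())
```

The cutoff and predictors here are arbitrary; the point is only the two-step logic of modeling mode sensitivity on a prior round and then using the scores to tailor the protocol offered to new cases.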
In this presentation, we will discuss the model we developed, including the characteristics most predictive of paper preference. We will also discuss early results, such as the response rate, representation of hard-to-reach populations, and screener item responses, as compared to two other NHES:2019 conditions in which all households were assigned either a paper-only protocol or a web-push protocol. This presentation will be of interest to researchers looking to use tailored mixed-mode designs to maximize response and minimize bias.