Improving Response in Government Surveys 2

Chair: Dr Kerry Levin (Westat)
Coordinator 1: Dr Jocelyn Newsome (Westat)
Coordinator 2: Ms Brenda Schafer (Internal Revenue Service)
Most surveys are fielded over a short time frame for reasons of expediency and timeliness. The few surveys with longer field periods often release all sample up front, while comparisons across independent surveys are often confounded by differences in protocol, content, sponsor, materials, instruments, and so forth. Such factors hinder a straightforward evaluation of seasonal response rates. Our study gauged respondent annoyance with a number of neighborhood environmental factors. Because these attitudes can vary by season, our design required a 12-month field period, with periodic sample release, to capture potential seasonal variation. Our population of interest was households exposed to a certain level of noise around selected U.S. airports, so an address-based sample (ABS) was most appropriate. Using an ABS frame, we released sample in six waves, each two months apart, to provide a full year of data. The mail protocol for each wave took six weeks from start to finish, providing us with daily responses over the data collection year. Here we present the results of our study, showing the impact of season on response rates in an ABS mail survey where all other variables remained constant. Our findings were not always in alignment with conventional wisdom. We will offer considerations for timing mail data collection, or adjusting sample, to address potential seasonal fluctuations.
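As an illustration of the release design described above, the sketch below lays out six bimonthly waves, each with a six-week mail protocol, spanning a full year. The start date and exact wave spacing are assumptions for the example; the abstract does not give them.

```python
from datetime import date, timedelta

# Hypothetical 12-month field period: six sample waves released roughly
# two months apart, each followed by a six-week mail protocol.
FIELD_START = date(2015, 1, 1)        # assumed start date (not given)
WAVE_SPACING = timedelta(days=61)     # roughly two months
PROTOCOL_LENGTH = timedelta(weeks=6)  # mail protocol per wave

for wave in range(6):
    release = FIELD_START + wave * WAVE_SPACING
    close = release + PROTOCOL_LENGTH
    print(f"Wave {wave + 1}: released {release}, mail protocol ends {close}")
```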
The ‘German Health Update’ (GEDA) study is a population-based cross-sectional health interview survey of the German adult population, conducted by the Robert Koch Institute (RKI) on behalf of the German Federal Ministry of Health. GEDA is one of the three components of the German Federal Health Monitoring program operated by the RKI at the national level. Three GEDA waves were carried out as telephone interview surveys between 2009 and 2012, with more than 60,000 respondents participating.
Because of decreasing response rates and increasing costs, a design switch was made for the most recent GEDA wave (GEDA 2014/2015-EHIS). A sequential mixed-mode data collection design was developed, informed by the experience gained in two pilot studies (2012 and 2014). Mixed-mode here is defined as using one survey instrument with two or more data collection modes. In GEDA 2014/2015-EHIS, two modes of data collection were used: a self-administered web questionnaire (SAQ-Web) and a self-administered paper questionnaire (SAQ-Paper).
In the first pilot study, conducted in 2012, two different mixed-mode designs were tested: a sequential and a concurrent design. In a feasibility study in 2014, the design was further modified and optimized. Additionally, different incentives were tested in this survey to find the most effective and cost-efficient strategy for future GEDA surveys. The sample was divided into four groups: the first group received a postage stamp with the invitation letter; the second group was guaranteed a 10€ voucher after participating; the third group could take part in a lottery for a 50€ voucher; and the last (control) group was offered nothing. The results showed that incentives have a positive effect on participation rates. However, the different incentives had very different effects depending on respondents’ sociodemographic characteristics.
In total, 24,824 questionnaires were completed in the GEDA 2014/2015-EHIS survey (45.3% via web and 54.7% via paper). The response rate was 27.6% (compared with 22% in GEDA 2012).
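For reference, assuming a simple completes-over-eligible-sample definition of the response rate (the exact rate formula is not specified here), a 27.6% rate with 24,824 completed questionnaires implies a gross eligible sample of roughly 24,824 / 0.276 ≈ 90,000 invited persons.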
In this presentation, we will trace the development of the GEDA study from a telephone survey to a mixed-mode survey. We will focus on the measures taken to improve response rates that were tested in the two pilot studies, and discuss their implications for the main GEDA 2014/2015-EHIS study.
In the face of historically low response rates, researchers have explored whether shorter surveys can reduce nonresponse bias without compromising data quality. Although some studies have shown that lengthier surveys encourage satisficing, item nonresponse, or increased “Don’t Know” responses (Malhotra, 2008; Deutskens et al., 2004; Galesic & Bosnjak, 2009), a 2011 meta-analysis by Rolstad et al. found that the impact of questionnaire length on data quality was inconclusive.
In this study, we explore whether using a shortened follow-up survey with non-respondents increases response to an IRS survey without negatively affecting data quality. The IRS Individual Taxpayer Burden (ITB) Survey measures the time and money respondents spend to comply with tax filing regulations. It is conducted annually with about 20,000 respondents. Data collection includes six contacts: (1) a prenote from the IRS, (2) a survey packet, (3) a reminder postcard, (4) a survey packet, (5) a reminder phone call or postcard, and (6) a survey packet. The survey itself comprises two critical items that ask directly about time and money, along with 24 other items that provide context to respondents by asking more generally about the tax filing process. The shortened version includes only the time and money items and eliminates the contextual items.
For the 2013 ITB Survey, we conducted an experiment in which half of non-respondents received the original “long” version of the survey and half received the “short” version at the sixth contact. The results suggested that the short form did not affect data quality. Surprisingly, the short form also did not improve survey response (Newsome et al., 2015). We hypothesized that response rates may not have improved because the short form was administered as the final contact, when it is most difficult to encourage response. In addition, both surveys were mailed in the same-sized envelope, so respondents might not have realized they were receiving a shorter version of the questionnaire.
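For concreteness, an experimental comparison like this one is typically evaluated with a two-proportion test on response rates. The minimal Python sketch below uses placeholder counts, not the study’s actual figures:

```python
from statsmodels.stats.proportion import proportions_ztest

# Placeholder counts, NOT the study's actual figures: completed
# questionnaires and mailed cases for the long and short forms.
completes = [450, 470]   # long form, short form
mailed = [5000, 5000]

# Two-sided z-test for a difference in response proportions.
stat, pvalue = proportions_ztest(completes, mailed)
print(f"z = {stat:.2f}, p = {pvalue:.3f}")
```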
Because the ITB Survey is conducted annually, we had the opportunity to refine our methodology for the 2014 administration. This time, all non-respondents received the “short” survey at the sixth contact. Additionally, at the fourth contact, half of non-respondents within select hard-to-reach groups were sent the “short” survey. To better distinguish the “short” survey from other 2014 mailings, it was sent in a smaller envelope, and the survey packet was labeled “short survey.”
In this paper, we examine the impact of the short version on overall response rates, as well as on specific populations that have historically been underrepresented in the survey (e.g., younger adults, low-income respondents, and parents with young children). We also assess the impact of the shortened version on data quality. In particular, we are interested in whether removing the contextual items leads respondents to give higher or lower estimates of their time and money burden compared with the long version.