How to Deal with "Don't know" and Other Non-Response Codes in Online and Mixed-Mode Surveys
Session Organiser: Mr Tim Hanson (Kantar Public)
Time: Wednesday 17th July, 14:00 - 15:00
Room: D25
Researchers have long debated the treatment of “Don’t know” codes in surveys. Some have argued that a “Don’t know” response is often an expression of satisficing and that the explicit inclusion of this code may not improve data quality (e.g. Krosnick, 2000, 2002). Others have argued that an explicit “Don’t know” option is necessary because respondents sometimes genuinely do not know the information asked of them; by not offering a clear “Don’t know” option there is a risk of collecting “non-attitudes” (Converse, 1976).
The treatment of “Don’t know” codes has increased in importance with the movement of surveys online, often as part of mixed-mode designs. In an interviewer-administered context the interviewer can code a “Don’t know” response where it is offered spontaneously. This approach cannot be replicated in a self-completion setting (including online surveys), meaning alternative approaches are required, which can affect the way people respond. There is therefore a growing need within survey research to develop best practice across modes for presenting “Don’t know” and other response options that have traditionally been coded only where spontaneously offered by respondents.
A range of approaches have been used to deal with “Don’t know” and other non-response codes in online surveys. These include: (1) displaying these codes as part of the main response list so they are always available to respondents; (2) hiding these codes from the initial response list with instructions for respondents on how to select them (e.g. on a hidden ‘second screen’ that can be generated should none of the initial responses fit); and (3) removing these codes altogether from some or all survey questions. All three approaches have potential flaws in terms of comparability with other modes and risks of satisficing behaviours, reporting of non-attitudes and lower data quality. Currently there is no clear consensus among the survey research community over the best approach to take.
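As a purely illustrative aid (not part of the session description), the sketch below shows how the three treatments listed above might be expressed in a generic web-survey scripting layer; the class, field names and example question are assumptions.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class QuestionConfig:
    text: str
    options: List[str]
    dk_treatment: str              # "visible", "second_screen" or "omitted"
    dk_label: str = "Don't know"

    def initial_options(self) -> List[str]:
        """Options shown on the first screen of the question."""
        if self.dk_treatment == "visible":        # (1) DK always part of the main list
            return self.options + [self.dk_label]
        return list(self.options)                 # (2) and (3): DK hidden at first

    def follow_up_options(self) -> List[str]:
        """Options offered only if the respondent tries to skip the question."""
        if self.dk_treatment == "second_screen":  # (2) DK on a hidden second screen
            return [self.dk_label]
        return []                                 # (3) DK removed altogether


question = QuestionConfig(
    text="How satisfied are you with your local health services?",
    options=["Very satisfied", "Fairly satisfied",
             "Fairly dissatisfied", "Very dissatisfied"],
    dk_treatment="second_screen",
)
print(question.initial_options())    # main list without "Don't know"
print(question.follow_up_options())  # ["Don't know"], shown only on the second screen
```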
We welcome papers that have used different approaches for dealing with “Don’t know” and other non-response codes in online and mixed-mode surveys. Papers that include quantitative experiments or user testing to compare different treatments of these codes are particularly encouraged.
Keywords: Item non-response, Don't know codes, Online, Mixed-mode
Mr Johannes Lemcke (Robert Koch-Institut) - Presenting Author
Mr Stefan Albrecht (Robert Koch-Institut)
Ms Sophie Schertell (Robert Koch-Institut)
Mr Matthias Wetzstein (Robert Koch-Institut)
Background
In survey research, item nonresponse is regarded as an important criterion for data quality, alongside other quality indicators such as the breakoff rate and straightlining (Blasius & Thiessen, 2012). This is because, as with unit nonresponse, persons who do not answer a specific item can systematically differ from those who do. In online surveys this risk can be countered by prompting after item nonresponse. Prompting here means a friendly reminder displayed to the respondent, inviting them once more to give an answer. If the respondent still does not want to answer, it is possible to move on in the questionnaire (soft prompt). The forced-choice option, however, requires a response to every item.
There is still a research gap concerning the effects of prompting and forced-choice options in web surveys on data quality. Tourangeau (2013) reaches a similar conclusion: ‘More research is needed, especially on the potential trade-off between missing data and the quality of the responses when answers are required’.
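To make the three prompting regimes described above concrete, here is a minimal sketch of the control flow for a single item; it is an assumed illustration, not the authors' implementation, and the function name and console input are placeholders.

```python
from typing import Optional


def ask_item(question: str, regime: str) -> Optional[str]:
    """Return the respondent's answer, or None if the item is left unanswered."""
    answer = input(f"{question}\n> ").strip() or None

    if answer is None and regime == "soft_prompt":
        # One friendly reminder; the respondent may still move on without answering.
        answer = input("You have not answered this question. If you wish, "
                       "please give an answer; otherwise press Enter.\n> ").strip() or None

    while answer is None and regime == "forced_choice":
        # A response is required on every item before the questionnaire continues.
        answer = input("An answer is required to continue.\n> ").strip() or None

    return answer  # with "no_prompt", whatever was entered the first time (possibly None)
```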
Methods
We will conduct a methodological experiment using a non-probability sample recruited via a social network platform in January 2019. To test the different prompting options we will implement three experimental groups (forced choice, soft prompt, no prompt; total N = 1,200). Besides the item nonresponse rate, we will use the following data quality indicators: breakoff rate, straightlining behavior, completion time, tendency to give socially desirable answers and self-reported interest.
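As a hedged illustration of two of the listed indicators (item nonresponse and straightlining), the following sketch shows how they might be computed from a respondents-by-items grid; the toy data and function names are assumptions, not taken from the study.

```python
import pandas as pd


def item_nonresponse_rate(grid: pd.DataFrame) -> float:
    """Share of all requested items that were left unanswered (NaN cells)."""
    return float(grid.isna().mean().mean())


def straightlining_rate(grid: pd.DataFrame) -> float:
    """Share of respondents who gave the identical answer to every item of the grid."""
    answered = grid.dropna()                        # respondents with no missing item
    return float((answered.nunique(axis=1) == 1).mean())


# Toy data: rows are respondents, columns are items of one grid question.
grid = pd.DataFrame({
    "item1": [1, 3, 2, None],
    "item2": [1, 3, 4, 2],
    "item3": [1, 3, 1, None],
})
print(item_nonresponse_rate(grid))  # 2 of 12 cells missing -> ~0.17
print(straightlining_rate(grid))    # 2 of 3 complete respondents straightline -> ~0.67
```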
Results
First pre-test results showed a higher breakoff rate for specific questions where forced choice was applied. Furthermore, a higher item nonresponse rate was found for the no-prompt option. Final results on the effects on data quality will be presented at the conference.
Mrs Jannine van de Maat (SCP) - Presenting Author
It is well established that the design of individual questions affects the responses and, subsequently, the outcome of a survey (e.g. Bradburn, Sudman, & Wansink, 2004; Schuman & Presser, 1996). More specifically, there is evidence that offering a don’t know option more explicitly, as a response option or filter question, results in more non-substantive answers, i.e. more item nonresponse (e.g. Bishop, 2005). Respondents may use a don’t know option for various reasons - because they cannot or do not want to answer a survey question, or to lower the cognitive burden (e.g. Schuman & Presser, 1996; Krosnick & Presser, 2010) - which results in item nonresponse.
It remains, however, an open empirical question how these non-substantive answers affect the actual distribution of opinions or survey outcome. Using data from a large-scale internet survey experiment in the Netherlands, the effect of varying degrees of non-substantive answers on survey outcomes is examined. By applying various question designs which differ in the possibility for respondents to register a non-substantive (don’t know) answer, the number of non-substantive answers is identified. Subsequently, the resulting distributions of opinions or substantive survey outcomes are examined. The research question is: How does the don’t know option affect substantive survey outcomes?
The aim is not to discuss whether a non-substantive response option should be offered, but to investigate the impact of various ways to register non-substantive answers on the results for the specific substantive response alternatives.
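A minimal sketch of the comparison at the heart of this abstract - how including or excluding "don't know" answers from the base changes the distribution of the substantive response alternatives - is given below; the figures are invented purely for illustration.

```python
from collections import Counter

# Toy data: 1,000 answers to a single opinion item (invented for illustration).
answers = ["agree"] * 420 + ["disagree"] * 380 + ["dont_know"] * 200


def distribution(values, drop_dk=True):
    """Percentage distribution, optionally excluding 'don't know' from the base."""
    kept = [v for v in values if not (drop_dk and v == "dont_know")]
    counts = Counter(kept)
    return {k: round(100 * n / len(kept), 1) for k, n in counts.items()}


print(distribution(answers, drop_dk=False))  # agree 42.0, disagree 38.0, dont_know 20.0
print(distribution(answers, drop_dk=True))   # agree 52.5, disagree 47.5
```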
Mr Adam Stefkovics (PhD student) - Presenting Author
Mrs Júlia Koltai (assistant professor)
Mr Zoltán Kmetty (senior lecturer)
The rapidly growing interest in online surveys has raised several methodological concerns about questionnaire design, such as the handling of ‘do not know’ answers or the use of “check all that apply” questions. Decisions about questionnaire design may have a significant impact on the level of nonresponse and the quality of the data. Despite the importance of these questions, there is limited empirical evidence that would help develop best practices. Nor is it clear which socio-demographic groups are more sensitive to questionnaire design effects.
This paper aims to assess how different strategies for including ‘do not know’ answers, making responses mandatory, and the question format (check all that apply vs. forced choice) affect the level of item and unit nonresponse and the reliability of the answers. We also aim to investigate how socio-demographic characteristics, psychological factors and paradata correlate with questionnaire design effects.
We will use the results of a survey experiment, which will be conducted on a non-probability-based online panel in Hungary in March 2019. A total of 2,000 panel members will be randomly assigned to one of eight experimental groups; each group will receive a different version of the questionnaire. The questionnaires differ in three design elements: the display of the ‘DK’ answer option (offered vs. not offered), whether responses are mandatory (mandatory vs. not mandatory) and the question format (check all that apply vs. forced choice). We will use the welfare- and politics-related questions of the 8th wave of the ESS. One of the eight groups follows the original ESS questionnaire design (control group). The experiment will be pre-registered before the start of the fieldwork.
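As an assumed illustration of the 2 x 2 x 2 factorial design described above (not the authors' code), the sketch below enumerates the eight questionnaire versions and randomly assigns panel members to them; the factor labels and seed are placeholders.

```python
import itertools
import random

# The three design factors from the abstract; labels are placeholders.
FACTORS = {
    "dk_option": ["offered", "not_offered"],
    "mandatory": ["mandatory", "not_mandatory"],
    "format":    ["check_all_that_apply", "forced_choice"],
}

# The 2 x 2 x 2 = 8 experimental conditions; one of them mirrors the original
# ESS questionnaire design and serves as the control group.
CONDITIONS = [dict(zip(FACTORS, combo)) for combo in itertools.product(*FACTORS.values())]


def assign(panel_ids, seed=2019):
    """Simple random assignment of each panel member to one of the eight conditions."""
    rng = random.Random(seed)
    return {pid: rng.choice(CONDITIONS) for pid in panel_ids}


assignment = assign(range(2000))
print(len(CONDITIONS))  # 8
print(assignment[0])    # e.g. {'dk_option': ..., 'mandatory': ..., 'format': ...}
```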
This experiment adds important evidence on how different questionnaire design strategies affect survey data quality. Our first results, as well as the practical implications of our findings, will be presented at the conference.