Decreasing measurement error for web surveys
Session Organisers | Dr Verena Ortmanns (GESIS - Leibniz Institute for the Social Sciences), Dr Ranjit Singh (GESIS - Leibniz Institute for the Social Sciences), Ms Patricia Hadler (GESIS - Leibniz Institute for the Social Sciences), Dr Cornelia Neuert (GESIS - Leibniz Institute for the Social Sciences)
Time | Friday 9 July, 16:45 - 18:00
Technological change and the increasing level of digitalization are transforming survey research practice. Nevertheless, collecting high-quality data remains a central aim of survey research, and so the fight against errors of measurement and representation enters a new arena. This session presents current research on different approaches to decreasing measurement error in web surveys. The talks cover a broad range of issues, such as data quality in probability and non-probability online panels, the potential of paradata to detect measurement error, questionnaire design and linguistic aspects in web surveys, and mode effects on sensitive questions. The session is of interest to researchers involved in large survey programs, to those fielding smaller web surveys, and to all researchers who want to assess the quality of data collected via web surveys.
Keywords: web surveys, data quality, technological change
Ms Irina Bauer (GESIS Leibniz Institute for the Social Sciences) - Presenting Author
Dr Tanja Kunz (GESIS Leibniz Institute for the Social Sciences)
Dr Tobias Gummer (GESIS Leibniz Institute for the Social Sciences)
Comprehending survey questions is an essential step in the cognitive response process that respondents go through when answering questions. Respondents who have difficulty understanding survey questions may not answer at all, drop out of the survey, give random answers, or take shortcuts in the cognitive response process – all of which can decrease data quality. Comprehension problems are especially likely among respondents with low literacy skills: the 2018 LEO survey estimates the proportion of adults with low literacy in Germany at 12 percent. ‘Simple Language’, that is, clear, concise, and uncomplicated language for survey questions, may help mitigate comprehension problems and thus increase data quality. ‘Simple Language’ is a linguistically simplified version of standard language, characterized by short and concise sentences with a simple syntax that avoids foreign words, metaphors, and abstract concepts. To investigate the impact of ‘Simple Language’ on data quality, we conducted a 10-minute web survey among 4,000 respondents of an online access panel in Germany. Respondents were randomly assigned to a questionnaire that used either ‘Simple Language’ or ‘Standard Language’. We examine various indicators of data quality, including “don’t know” responses and nondifferentiation. In addition, we investigate various aspects of respondents’ survey assessment. Since data collection was only completed in December, results are not yet available. However, we expect the use of ‘Simple Language’ to have a positive effect on data quality and survey assessment, and we expect this effect to be especially pronounced for subgroups more likely to have low literacy, such as people with a lower level of formal education or a native language other than German.
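The abstract names “don’t know” responses and nondifferentiation as its data-quality indicators. As a rough illustration only (not the authors’ analysis code; the column names and the DK code are hypothetical), one common way to operationalize these is a per-respondent “don’t know” share and a mean absolute pairwise difference across grid items, where 0 indicates straightlining:

```python
import numpy as np
import pandas as pd

DK = -9  # hypothetical "don't know" code


def dont_know_share(df: pd.DataFrame) -> pd.Series:
    """Share of 'don't know' answers per respondent."""
    return (df == DK).mean(axis=1)


def nondifferentiation(df: pd.DataFrame) -> pd.Series:
    """Mean absolute pairwise difference across grid items;
    0 means identical answers to every item (straightlining)."""
    vals = df.replace(DK, np.nan).to_numpy(dtype=float)
    diffs = np.abs(vals[:, :, None] - vals[:, None, :])
    iu = np.triu_indices(vals.shape[1], k=1)  # each item pair once
    return pd.Series(np.nanmean(diffs[:, iu[0], iu[1]], axis=1), index=df.index)


# Hypothetical 5-point grid battery: one straightliner, one differentiator,
# and one respondent with a "don't know" answer
grid = pd.DataFrame({"item1": [3, 5, DK], "item2": [3, 1, 2], "item3": [3, 4, 2]})
print(dont_know_share(grid))
print(nondifferentiation(grid))
```

Such indicators could then be compared between the ‘Simple Language’ and ‘Standard Language’ groups.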
Mr Daniil Lebedev (National Research University Higher School of Economics) - Presenting Author
The widespread use of online data collection methods makes it possible to collect and analyse paradata: information obtained in the process of data collection, including records of the characteristics and behavior of the interviewer and the respondent, as well as of the interview situation as a whole. Research on using paradata to evaluate and improve survey data quality lacks a framework that connects all available types of paradata to the erroneous situations that can occur during the survey completion process. In practice, researchers tend to analyse different paradata types separately, which narrows the possibilities for assessing and reducing measurement error. The research question is as follows: how can different types of paradata and their combinations be used to evaluate and reduce measurement error in web surveys?
In this paper we present the results of a web experiment with two experimental groups. Participants were asked either to fill out the online survey as quickly as possible, with low motivation to provide accurate data (the “satisficing” condition), or to fill out the survey as accurately as possible (the “optimizing” condition). For each participant, a wide range of paradata was collected during the survey completion process, including mouse movements, changes of browser focus, response latency, and others, using the One Click Survey web software with a beta version of advanced paradata collection tools (www.1ka.si/d/en). In total, 97 students participated, allowing us to compare data quality between the conditions and to explore how combinations of different paradata types can be used to detect potentially erroneous situations that lead to an increase in measurement error.
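As a loose illustration of the kind of multi-paradata rule the paper argues for (this is not the 1KA tooling itself, and all field names and thresholds are hypothetical), one might flag respondents only when several independent signals agree, rather than relying on a single paradata type:

```python
import pandas as pd

# Hypothetical per-respondent paradata aggregated from the raw event stream
paradata = pd.DataFrame({
    "respondent": [1, 2, 3],
    "median_page_seconds": [4.2, 18.5, 3.1],  # response latency per page
    "focus_changes": [9, 1, 0],               # times the browser lost focus
    "mouse_distance_px": [1200, 5400, 800],   # total mouse travel
})

# Hypothetical cut-offs; in practice they would be calibrated, e.g. against
# the distribution observed in the "optimizing" condition.
SPEED_CUTOFF_S = 6.0
FOCUS_CUTOFF = 5

paradata["speeding"] = paradata["median_page_seconds"] < SPEED_CUTOFF_S
paradata["distracted"] = paradata["focus_changes"] > FOCUS_CUTOFF
# Flag only when both signals agree: combining paradata types narrows
# false positives compared with any single indicator alone.
paradata["possible_satisficing"] = paradata["speeding"] & paradata["distracted"]
print(paradata[["respondent", "possible_satisficing"]])
```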
Dr Zeina Mneimneh (University of Michigan) - Presenting Author
Mrs Jennifer Kelley (University of Michigan)
Dr Yasmin Altwaijri (King Faisal Specialist Hospital and Research Center)
The use of audio computer-assisted self-interviewing (ACASI) to reduce reporting bias for sensitive information is well documented in Western countries (Couper, Singer, and Tourangeau, 2003; Epstein, Barker, and Kroutil, 2001; Lindberg and Scott, 2018). Other parts of the world are also increasingly adopting ACASI (Langhaug, Sherr, and Cowan, 2010; Mensch, Hewett, and Erulkar, 2003). Yet one region where ACASI has not been adopted is the Arabian Gulf. Understanding respondents' willingness to engage in an ACASI administration, and its effect on improving the reporting of sensitive information, is essential in the Arabian Gulf as cross-national studies that emphasize data comparability expand to include more countries worldwide. The question of willingness to engage in an ACASI administration is important to explore given the novelty of the approach in this culture for both respondents and interviewers. If many respondents refuse to use ACASI, the mode's effectiveness in improving the reporting of sensitive information could be jeopardized.
This paper examines respondents’ willingness to engage in ACASI administration and its effect on reporting sensitive information in the Saudi National Mental Health Survey (SNMHS). The SNMHS is the first national mental health survey in the Kingdom of Saudi Arabia (KSA) and is part of the cross-national World Mental Health (WMH) Initiative. All 4,004 completed interviews were conducted face-to-face by trained interviewers who were gender-matched to respondents. Computer-assisted personal interviewing (CAPI) was the main mode for the majority of the sections, supplemented by two separate ACASI administrations (given the sensitive nature of many assessed topics). The first administration included questions on suicide and marital relationships and was offered to all respondents in ACASI mode with the option to switch to CAPI. The second administration included questions on attitudes towards substance and alcohol use and on conduct disorder behaviors; it was randomly assigned to ACASI for half of the sample and CAPI for the other half. Using the first administration, we will present findings on the rate of refusal to engage in ACASI (and to switch to CAPI) and on which respondent- and interviewer-level characteristics are associated with this switch. Using the second administration, we will further explore the effect of ACASI (compared with CAPI) on reporting sensitive information related to alcohol and drug use and to engaging in conduct disorder behaviors. Differences in the rates of endorsing sensitive attitudes or behaviors and in missing data rates will be compared between the ACASI and CAPI modes.
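The planned mode comparison of endorsement rates could, for instance, take the form of a simple two-proportion test; the sketch below is only a generic illustration with made-up counts, not the SNMHS analysis:

```python
from scipy.stats import chi2_contingency

# Hypothetical counts: respondents endorsing a sensitive item, by mode
#                endorsed  not endorsed
table = [[180, 1820],   # ACASI
         [120, 1880]]   # CAPI

# Chi-square test of independence between mode and endorsement
chi2, p, dof, expected = chi2_contingency(table)
acasi_rate = table[0][0] / sum(table[0])
capi_rate = table[1][0] / sum(table[1])
print(f"ACASI endorsement rate: {acasi_rate:.1%}")
print(f"CAPI endorsement rate:  {capi_rate:.1%}")
print(f"chi-square = {chi2:.2f}, p = {p:.4f} (dof = {dof})")
```

A higher endorsement rate under ACASI would be consistent with the mode reducing social desirability bias for sensitive items.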