Innovations in measurement instrument construction for web-based surveys 2

Convenor: Mr Simon Munzert (University of Konstanz, Germany)
Large parts of the growing body of research on web-based surveys deal with coverage, sampling, and nonresponse issues, and therefore with questions of representativeness. Less frequently discussed are the measurement issues which arise from the unique way web-based surveys are conducted. In comparison with other modes, web-based surveys provide a bouquet of new tools and methods which allow for previously unknown flexibility in designing measurement instruments: respondents can be presented with (audio-)visual information in addition to (or even as a substitute for) verbal information, question and item order can easily be randomized, and valuable paradata such as response latencies, keystroke measures, or server-side paradata can be collected on the fly. These tools may help reduce respondents' burden when answering the questionnaire, but they also allow for developing completely new instruments for existing concepts (e.g., visual measures of various kinds of knowledge). Although measuring opinions, facts, etc. in an online setting might introduce additional measurement bias in comparison with other modes, the web survey toolbox provides instruments which can and should be used to combat these sources of error.
The goal of this panel is to bring together scholars who use new web survey tools to improve existing measures or construct new measures of a variety of concepts. The focus here is not so much on purely stylistic adaptations of the questionnaire layout as on the development of new instruments with methods that go beyond ordinary question wording or response scale modifications. Papers presented in this session might deal with one of the following topics:
- innovative adaptation of existing instruments, or development of new ones, in a web-based survey setting using unique web survey tools
- use of web survey paradata to reduce survey error, or as a substantive measure
- studies implementing a cross-validation or MTMM design
In most online surveys, the large majority of questions are either multiple-choice questions or option/scaling grids. In fact, this has become the way we think about surveys.
One of the problems is that the leading survey systems are strongly modelled after classic paper-based surveys: apart from automatic data entry and some low-level data analysis, they offer very little innovation and make very little use of the benefits of the computerised online environment.
This results in poor, or at least uninspired, survey designs and a lean towards question types that are easy to analyse.
We believe it's time for a change, and we issue a call for innovation. The market deserves tools that drive creativity and put the user in control; tools that facilitate rather than restrict.
We intend to make a polemical argument and present a number of creative solutions to the problems we have found. We are very open to discussion; in fact, we hope to start a long-needed brainstorm about the innovative steps to be taken in this industry.
Since web surveys began to develop, a considerable body of literature has focused on their potential problems in terms of representativeness, coverage, and sampling errors. However, there is another important kind of possible error that should be considered too: measurement error. Web surveys allow the use of new instruments and new kinds of scales, such as drag-and-drop, that are attractive but not yet well known, particularly in terms of the level of measurement error they entail. This paper's goal is to compare different scales in one online panel in Spain, Mexico, and Colombia (the Netquest panel) by analysing some simple split-ballot experiments as well as some split-ballot multitrait-multimethod (SB-MTMM) experiments, which allow us to estimate the quality of different scales within the same online survey and therefore give us an idea of which scales perform better.
Web-based surveys provide an ideal opportunity to use multimedia applications such as images, sound, or video. In most cases images, especially header images, are used for aesthetic reasons, to motivate participants through a visually appealing design. Yet the possible impact of such visual cues on responses is largely unknown. Research has shown that adding an image to a survey question can, for example, alter the reported frequency of a certain behaviour or influence attitudinal assessments. Most studies in this area are restricted to the effects of different images on a single question.
A current web survey on student housing conducted at the University of Bonn included an experiment on images in the header of each individual page throughout the entire survey. At the beginning of the questionnaire, the participants (n=4,674) were randomly divided into four groups. Each group was shown either a picture of a certain type of dwelling and neighbourhood (deprived, average, or upscale) or no image in the header. An effect can be assumed since the header images are directly related to the presented questions. In this paper I will show the effects of the different header images on the substantive responses.