Experimental designs in online survey research 1
Convenor | Mr Henning Silber (Göttingen University)
Coordinator 1 | Mr Jan Karem Hoehne (Göttingen University) |
Coordinator 2 | Professor Dagmar Krebs (Giessen University) |
Survey researchers are aware that the design of questions can affect respondents' response behavior. One phenomenon that can occur when answering questions with multiple response categories is response-order effects. The bulk of the empirical findings regarding these effects is based on relatively "indirect" data: respondents' behavior is not observed directly but is reconstructed by the researcher. This paper therefore employs eye tracking, since measuring eye movements allows cognitive information processing to be examined. The analysis of the eye-tracking data shows that respondents do not pay the same amount of attention to all response categories.
Recent experimental research has shown that the check-all and forced-choice question formats do not produce comparable results (Smyth et al., 2006).
In this study, half of the respondents (n=42) were assigned to a version of a survey in which two questions were formatted as check-all-that-apply questions and the other half (n=42) were assigned to a version in which the same two questions were formatted as forced-choice questions.
By analyzing respondents' eye movements, we compare the two question formats with respect to the amount of attention and cognitive effort respondents expend while answering the questions.
Characteristics of response scales are important factors guiding the cognitive processes underlying the choice of a response category when answering an attitude item. In addition, the mode of data collection may influence response behavior. This paper deals with the effect of scale direction in two different modes: a web survey and a paper-and-pencil survey. Identical items are presented with response scales running in opposite directions, beginning on the left-hand side with either the positive or the negative response option. Depending on scale direction, the scales are labeled agree-disagree or disagree-agree.
Online survey experiments with different question forms are frequently employed to evaluate question wordings and response scales. Building on the results of these experiments, more accurate wordings and response scales are selected. Yeager et al. (2011) challenged the uncritical use of non-probability online samples by showing that different non-probability online samples do not lead to the same conclusions about the distributions of substantive variables. Based on their findings, this paper investigates whether identical split-ballot experiments employed in seven non-probability online surveys yield the same results.