The impact of questionnaire design on measurements in surveys 4
Convenor: Dr Natalja Menold (GESIS)
Coordinator 1: Ms Kathrin Bogner (GESIS)
It has been shown that the length of Likert scales and the order of the items in a questionnaire can often affect the way people respond. Unfortunately, the impact of item order and scale length has only been explored independently, in separate studies. It has thus not been possible to investigate whether the two may interact with each other. This study uses an elaborate experimental design and appropriate psychometric models to investigate the differential impact of item order and scale length on the quality of measurement. Suggestions for improved questionnaire design are also discussed.
One of the most common methods for measuring attitudes towards immigration is the use of different kinds of direct questions. The aim of this study is to test whether specific direct questions measuring attitudes towards immigrants are comparable when respondents use an unbalanced scale with 4 answer categories versus dichotomous response options. The analysis is based on a list experiment conducted within the LISS panel. To test whether the two formulations are comparable, the responses with 4 categories are converted into dichotomous form and an equivalence test against the original dichotomous responses is conducted, using latent class analysis (LCA).
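The recoding step described above can be sketched in a few lines. The cut point and category labels below are illustrative assumptions, not details taken from the study:

```python
# Hypothetical sketch: collapsing a 4-category agreement scale into
# dichotomous form so it can be compared with an originally dichotomous
# item. The cut point between categories 2 and 3 is an assumption.

def dichotomize(response_4cat):
    """Collapse a 1-4 rating (1 = strongly disagree ... 4 = strongly
    agree) to 0/1 by splitting between categories 2 and 3."""
    if response_4cat not in (1, 2, 3, 4):
        raise ValueError("expected a category from 1 to 4")
    return 1 if response_4cat >= 3 else 0

# Recode a small batch of (made-up) responses and compute simple
# agreement with the original dichotomous item for the same respondents.
four_cat = [1, 2, 3, 4, 2, 4]
original_binary = [0, 0, 1, 1, 1, 1]
recoded = [dichotomize(r) for r in four_cat]
agreement = sum(a == b for a, b in zip(recoded, original_binary)) / len(recoded)
print(recoded)    # [0, 0, 1, 1, 0, 1]
```

The actual equivalence test in the study is a latent class analysis over both response formats, not the raw agreement rate shown here.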
This paper investigates a Locus-of-Control scale developed and translated by GESIS, emphasizing the effect of the number of response categories on responses and data structure when respondents originate from different geographical regions. For internationally oriented universities such as Wageningen University, Locus of Control is a useful concept provided it can be measured by means of a short and simple scale administered as part of a standardized survey in English. Results will be presented based on a five- versus nine-response-category manipulation, administered to a student sample (N=500 from over 40 countries) in the fall of 2013 and 2014.
Item-specific response scales are widely used in survey research to measure opinions, attitudes, evaluations or feelings. One of the primary considerations in response scale design is how many categories should be used. We provide evidence about the optimal number of response categories for item-specific scales in terms of reliability and validity. To do so, we analysed Multitrait-Multimethod (MTMM) experiments from the European Social Survey (ESS) Round 6, including more than 25 countries and more than 20 languages.
Respondents rely on both verbal and visual cues in order to interpret the meaning of self-administered questionnaires. About 500 students taking an admission instrument for the University of Trento completed the same instrument first on paper and then on the web. In the web version the spacing among the four fully labeled response options turned out to be uneven, with a larger space between the first and second labels, while in the paper version the spacing was equidistant. The results show a shift of averages and a reduction of variance in the web version when compared to the paper version.