The impact of questionnaire design on measurements in surveys 2

Convenor: Dr Natalja Menold (GESIS)
Coordinator 1: Ms Kathrin Bogner (GESIS)
Survey questions in web surveys often include additional instructions for respondents regarding the use of the scale. Across an entire questionnaire, such instructions can considerably increase its length. In many cases, however, these scale instructions are unnecessary, because the fully labeled answer scale is provided as part of the visually displayed response options.
We use survey data to demonstrate that respondents adopt rational strategies when completing a survey and skip the scale instructions when the displayed answer scale already provides sufficient information. Thus, explanations of the scales in web surveys can often be omitted.
One of the main features of web surveys is that they can be interactive and provide various ways to improve the quality of the collected data, for example by including controls. Some research has been done on how these features can best be implemented in business surveys. However, many statistical agencies are still struggling with how best to design their web questionnaires for business surveys. This paper evaluates how both data providers and the users of the raw data (editors and analysts) assess the transition from paper to web in general, and the use of controls and codes specifically.
We conducted a methodological experiment in which we systematically manipulated the features of questionnaire items traditionally used to measure perceptions of and attitudes towards inequalities and redistributive policies. Specifically, we manipulated the order of questions regarding perceived inequalities, support for redistribution, and respondents’ views on tax progressivity. We also manipulated the number and specificity of occupational titles used to assess individual perceptions and the legitimacy of income inequalities. The results of the experiment were used to test hypotheses regarding the degree of between-subject agreement, the stability of answers to income inequality questions, and the degree of support for redistribution.
This paper investigates whether the position of non-cognitive tests within the questionnaire affects data quality. To test this, we randomly assigned two versions of the questionnaire among nearly five thousand individuals: in the first, the tests appeared as the sixth module out of 19; in the second, as the fifteenth. This experiment was conducted during the data collection of the first follow-up of a randomized controlled trial on labor training in Chile. The experimental setting allows us to establish a causal relationship between the position of the tests and data quality.
The aim of this research was to test the influence of question wording on survey quality. In particular, the focus was on errors in attitude measurements due to the unfamiliarity or complexity of scientific terms used in questions. In order to measure attitudes towards “biodiversity”, the words that respondents associate with biodiversity were first identified through statistical analyses of answers to an open-ended question. Items were then constructed from these words and tested in two versions of a questionnaire, which was distributed in January 2015 to urban residents of Geneva (N=2000).