Cognition in surveys 2

Convenor: Dr Bregje Holleman (Utrecht University)
Coordinator 1: Dr Naomi Kamoen (Utrecht University / Tilburg University)
In surveys, the choice between a positive and a negative question wording affects the answers. People are more likely to disagree with negative questions than to agree with positive ones. Do similar effects occur in VAAs? In a naturalistic field experiment during the Dutch local elections, we varied the polarity of VAA statements. Citizens visiting KiesKompas were randomly guided to different versions.
Analyses show a significant effect of question wording on the distribution of agree-disagree answers as well as on the proportion of non-substantive (no opinion) answers.
This study examines whether a question order effect occurs when a survey includes a set of questions that evaluate different objects. Using data from the American National Election Study, we test whether the order in which the presidential candidates are evaluated (the Republican candidate first and the Democrat second, or the reverse) affects responses. We report findings that indicate contrast and assimilation effects depending on respondents' background (partisanship, gender). We conclude with a discussion of the interplay between respondents' emotional predispositions and cognitive processes during survey response.
A particularly sensitive topic in political surveys is voting behavior: some respondents try to avoid such questions by refusing to answer. This presentation addresses precisely this issue and investigates the impact of different factors on this behavioral pattern, using both individual- and macro-level data. The main objective of the analysis is to show that voters' propensity to declare their vote is linked not only to cognitive predictors such as education, but also to the electoral characteristics of the neighborhood in which they are nested.
Drawing on the framework of cognition in surveys, cognitive validity can be described as the degree to which respondents construe survey items as intended by the survey developers. Based on the hypothesis that school self-evaluation (SSE) surveys are vulnerable with respect to cognitive validity, due to abstract educational concepts and the need for higher-level thinking, this study examines to what extent such surveys are cognitively valid. Results from 20 cognitive interviews with school staff give more insight into the cognitive validity of results from SSE instruments and into the issues respondents struggle with during the answering process.
Voting Advice Applications (VAAs) are highly popular web applications that offer voting recommendations to their users. These recommendations are based on the answers people give to a survey containing policy statements. In a field experiment, we studied to what extent people change their attitude depending on the way these statements are framed, and whether this effect is moderated by attitude strength. VAAs form a context in which response effects like these have direct practical relevance: if people change their attitude to policy statements, they will receive a different voting advice, which has been shown to affect vote choice in some circumstances.