Cognition in surveys |
|
Chair | Dr Bregje Holleman (Utrecht University) |
Coordinator 1 | Dr Naomi Kamoen (Tilburg University) |
Cognitive dual-process models of response behavior distinguish between two groups of respondents: those who answer based on simple decision heuristics and automatic-spontaneous cognitive processes, which are typically associated with short response latencies, and those who answer based on deliberative thought, which is associated with long response latencies. Empirical studies show that the two groups are susceptible to different types of response effects, such as acquiescence effects for quick responders or contrast effects of question order for slow responders.
For fast respondents, chronic attitude accessibility is assumed to moderate the attitude-response process: if accessibility is high, respondents will answer based on their attitudes; if it is low, they will answer based on simple decision heuristics or situational cues (Fazio 1990). We assume that respondents who give automatic-spontaneous answers without chronic attitude accessibility are the ones most likely to be affected by response effects that demand lower levels of elaboration (such as acquiescence effects).
Furthermore, respondents can be distinguished by their general attitude towards surveys, which leads to a specific role in surveys. These roles can either be cooperative, meaning respondents try to answer every question as truthfully as possible, or conforming, meaning respondents apply cost-benefit considerations when answering questions, which often leads to biased answers (Stocké 2004). Since such considerations presuppose a higher level of elaboration, we suppose that the general attitude towards surveys moderates only slow responses. Therefore, response effects that demand higher levels of elaboration (such as the contrast effect of question order) should be observable especially for slow responders with a negative general attitude towards surveys.
Thus, based on an extended dual-process model of response behavior, we propose a general association between specific types of response effects, response latencies, and respondents' attitudes in surveys. A respondent's degree of answer elaboration, general attitude towards surveys, and degree of chronic accessibility of the research case are all predictors of specific types of response effects.
To examine this assumption, we investigate the link between the general attitude towards surveys, attitude accessibility, the level of a responder's answer elaboration, and the occurrence of response effects (in particular the acquiescence effect and the assimilation and contrast effects of question order) to explain method effects according to dual-process models. For this examination, we will use data from 1) a paper-and-pencil evaluation project conducted from 2014 to 2016; 2) a web survey among students in 2016; and 3) a German longitudinal mixed-mode study, the GESIS Panel.
Sources:
Fazio, R.H. (1990): Multiple processes by which attitudes guide behavior: The MODE model as an integrative framework. Advances in Experimental Social Psychology 23, 75–109.
Stocké, V. (2004): Entstehungsbedingungen von Antwortverzerrungen durch soziale Erwünschtheit. Ein Vergleich der Prognosen der Rational-Choice Theorie und des Modells der Frame-Selektion. Zeitschrift für Soziologie 33, 303–320.
Survey methodological research has shown time and again that contrastive wordings in attitude questions affect the answers obtained. Rugg (1940) was the first to establish that a question about freedom of speech phrased with the verb ‘allow’ elicited more ‘no’-answers than there were ‘yes’-answers to the opposite question with ‘forbid’. Hence, respondents’ evaluations of free speech seemed more positive when a negative question had been asked.
Explanations have focused on a difference in connotations between positive and negative wordings (Schuman & Presser 1981; Holleman 2000). Another type of explanation for these wording effects can be derived from dual-route theories of information processing. Such theories (e.g., the ELM by Petty & Cacioppo or the satisficing model by Krosnick) propose that people with strong attitudes tend to process information about an issue more deeply, whereas people with weak attitudes tend to perform shallow or heuristic processing. As a result, the latter group will be more susceptible to superficial characteristics of the way the information is conveyed (e.g., wording or source credibility).
While theoretically plausible, the empirical evidence in extant survey research is very heterogeneous: often the wording effect for contrastive questions can be explained by (indicators of) attitude strength, but equally often attitude strength is found to be unrelated to the asymmetry. These heterogeneous findings might be due to differences in the operationalization of attitude strength.
In the current study, we tested the occurrence of wording effects for contrastive attitude questions once more for respondents holding strong and weak attitudes, in the context of political attitude questions in a Voting Advice Application. We manipulated the wording of 14 questions in one survey, which showed an overall wording effect in the direction already established by Rugg (1940). The wording effects were small compared to previous studies, which might be explained by the fact that a VAA is an opt-in survey with relatively highly motivated users.
We proceeded by investigating the role of attitude strength as a cause of the asymmetries found. Operationalizing attitude strength as political interest showed no relation to the asymmetries. Following research in political decision making, we then used an alternative operationalization in terms of respondents’ degree of political sophistication. In our study, variation in users’ level of political sophistication was systematically related to the size and occurrence of wording effects: the higher the political sophistication, the smaller the overall wording effect, and the group of VAA users with the highest levels of political sophistication was not susceptible to the effects of question wording at all. This seems to support an attitude strength explanation after all, and also argues for more context-specific measures of motivation and strength than those used previously.
In quantitative social research, the use of agree/disagree (A/D) questions (i.e., response categories based on an agreement continuum) is a common and very popular methodological technique for measuring respondents' attitudes and opinions. For instance, this question format is frequently used in the Eurobarometer, the ANES, and the ISSP. Theoretical considerations, however, suggest that A/D questions require effortful and intricate cognitive information processing. For this reason, a variety of survey scientists recommend the use of item-specific (IS) questions (i.e., response categories that address the underlying dimension of attitudes and opinions directly), since they seem to be less burdensome.
In the current study, we investigate cognitive effort (by means of response times and answer changes) and response quality (by means of survey satisficing indicators) associated with A/D and IS questions on PCs and smartphones. We collected data using the Netquest access panel from September to October 2016 and applied a split-ballot design with four experimental groups defined by device type (PC vs. smartphone) and question format (A/D vs. IS), resulting in a 2-by-2 research design. The first and second groups contained n = 300 respondents answering A/D or IS questions on PCs, respectively. The third and fourth groups, in contrast, contained n = 400 respondents answering A/D or IS questions on smartphones, respectively.
Although the data analysis is still pending, we expect – against current theoretical considerations – to observe longer response times and more answer changes for the IS than for the A/D question format, irrespective of device type. In addition, we expect to observe higher response quality for IS than for A/D questions.
What question characteristics are related to comprehension problems in political attitude questions? And what type of answering behaviour do people exhibit when they do not understand the question? We investigated these issues in the context of Voting Advice Applications (VAAs). These online tools provide users with a voting advice based on their answers to a set of about 30 political attitude questions. VAAs have become a central source of political information (see Garzia & Marschall, 2012), and research shows that the VAA voting advice affects the content of the vote cast (e.g., Andreadis & Wall, 2014). Therefore, it is of utmost importance to investigate to what extent VAA users understand the questions that lead to the voting advice and how they respond in case of comprehension difficulties.
Study 1 consists of cognitive interviews with 60 users, each filling out 30 VAA statements prior to the 2014 municipal elections in the Dutch municipality of Utrecht. The verbalizations of these respondents were recorded and categorized for several types of comprehension problems by two independent coders (Kappa/Kappa max between 0.58 and 0.98). Results show that VAA users encounter a comprehension problem for, on average, about 1 in 5 questions. About two-thirds of these are related to the semantic meaning of the question, covering difficulties with political jargon (e.g., 'dog tax') or geographical terms (e.g., a specific street in Utrecht). One-third of the comprehension problems are related to the pragmatic comprehension of the question. In these cases, the respondent does understand the literal meaning of all concepts in the question, but lacks the contextual knowledge needed to provide a well-considered answer. Such pragmatic comprehension problems are often triggered by vague quantifying terms in the question (e.g., 'taxes on housing should be raised'), which make users realize they lack knowledge about the current state of affairs ('How high is that tax now?'). In case of comprehension problems, VAA users often assume a certain question meaning, and hardly ever proceed by looking for information on the web. Nevertheless, a large majority of respondents provide a substantive answer (often the middle option).
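For readers less familiar with the agreement statistics reported above, the Kappa/Kappa max ratio relates Cohen's kappa to the maximum kappa attainable given the two coders' marginal distributions. A minimal sketch in Python might look like this; the category labels in the example ('sem', 'prag', 'none') are invented for illustration and are not the coding scheme actually used in the study:

```python
from collections import Counter

def cohen_kappa_and_max(labels_a, labels_b):
    """Cohen's kappa and its marginal-constrained maximum (kappa_max)."""
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    cats = set(labels_a) | set(labels_b)
    # Observed proportion of agreement between the two coders.
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Marginal frequencies per coder (Counter returns 0 for absent categories).
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    # Chance agreement expected from the marginals.
    p_e = sum((freq_a[c] / n) * (freq_b[c] / n) for c in cats)
    # Maximum agreement achievable given the fixed marginals.
    p_max = sum(min(freq_a[c], freq_b[c]) / n for c in cats)
    kappa = (p_o - p_e) / (1 - p_e)
    kappa_max = (p_max - p_e) / (1 - p_e)
    return kappa, kappa_max

# Hypothetical labels from two coders for six verbalizations.
k, k_max = cohen_kappa_and_max(
    ['sem', 'sem', 'prag', 'none', 'sem', 'prag'],
    ['sem', 'prag', 'prag', 'none', 'sem', 'none'])
# Here k = 0.5 and k_max = 0.75, so Kappa/Kappa max = 2/3.
```

The ratio k / k_max corrects for the fact that two coders with different marginal distributions can never reach perfect agreement, which is why it is often reported alongside raw kappa.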
In Study 2, we investigated whether the question characteristics that led to comprehension difficulties in Study 1 also lead to more neutral and no-opinion answers when statistically analyzed across a larger set of questions in a larger set of VAAs. We performed statistical analyses of all answers provided by 357,858 VAA respondents who used one of 34 different municipal VAAs during the 2014 Dutch municipal elections. The results of Study 2 confirm that political jargon, geographical locations, and vague quantifying terms are related to more neutral and/or no-opinion answers. Interestingly, there seems to be a relation between the type of comprehension problem and the type of answer provided: semantic meaning problems often result in no-opinion answers, whereas pragmatic problems are related to neutral responses.