Cognition in surveys 1

Convenor: Dr Naomi Kamoen (Utrecht University)
Coordinator 1: Dr Bregje Holleman (Utrecht University)
Cognitive research in surveys covers a wide range of approaches. In recent years, various models describing the cognitive processes underlying question answering in standardized surveys have been proposed. Much research is guided by the model of question answering by Tourangeau, Rips and Rasinski (2000). This model distinguishes four stages in question answering: (1) comprehension of the question, (2) retrieval of information, (3) deriving a judgement, and (4) formulating a response. In addition, there are dual-process models, such as the satisficing model proposed by Krosnick (1991). In this model, two groups of respondents are distinguished: those who satisfice, doing just enough to give a plausible answer, and those who optimize, doing their best to give a good answer.
Cognitive models, such as the two described above, have many applications. For example, they help in understanding what is measured when administering surveys, and they provide a point of departure in explaining the wide range of method effects survey researchers observe. Also, cognitive theory in surveys is used by psychologists, linguists and other scholars to obtain a deeper understanding of, for example, language processing, the nature of attitudes, and memory.
Recently, cognitive approaches have been used not only to describe processes of attitude measurement, but also to describe the ways attitudes are formed through standardized surveys. In this type of research, so-called 'decision aids', such as Voting Advice Applications, are studied. How do design choices in these decision aids affect users' answers, attitudes and behavioral intentions?
We cordially invite researchers addressing one or more of these topics to submit their papers to this session.
Panel surveys are used to measure change over time, but previous research has shown that simply asking the same questions of the same respondents in repeated interviews leads to overreporting of change. With proactive dependent interviewing, responses from the previous interview are preloaded into the questionnaire, and respondents are reminded of this information before being asked about their current situation. Existing research has shown that dependent interviewing techniques can reduce spurious change in wave-to-wave reports and thus improve the quality of estimates from longitudinal data. However, the literature provides little guidance on how such questions should be worded. After reminding a respondent of her report in the last wave ("Last time we interviewed you, you said that you were not employed"), we might ask: "Is that still the case?"; "Has that changed?"; or we might ask the original question again: "What is your current labour market activity?". We present evidence from a longitudinal telephone survey in Germany (n=1500) in which we experimentally manipulated the wording of the dependent questions and contrasted them with independent questions. We report differences in the responses collected by the different question types. We also test hypotheses about how respondents answer such questions, focusing on the roles played by personality, deliberate misreporting to shorten the interview, least effort strategies and cognitive ability in the response process to dependent questions. The paper provides evidence-based guidance on questionnaire design for panel surveys.
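The three experimental wordings described above can be illustrated with a short sketch. This is a hypothetical illustration, not the authors' actual instrument: the function name, the preloaded value and the wording labels are assumptions made for the example.

```python
# Illustrative sketch of proactive dependent interviewing: a prior-wave
# response is preloaded and prepended as a reminder, followed by one of
# the three experimental follow-up wordings. All names are hypothetical.

def dependent_question(preload: str, wording: str) -> str:
    """Build a dependent question for one of three wording conditions."""
    reminder = f"Last time we interviewed you, you said that you were {preload}."
    followups = {
        "still": "Is that still the case?",
        "changed": "Has that changed?",
        "repeat": "What is your current labour market activity?",  # original question re-asked
    }
    return f"{reminder} {followups[wording]}"

print(dependent_question("not employed", "still"))
# -> Last time we interviewed you, you said that you were not employed. Is that still the case?
```

An independent question, by contrast, would simply omit the reminder and ask the original question again.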
Asking survey respondents proxy questions has become more common in recent years due to progress in ego-centric network analysis. However, validation studies show that answers to proxy questions are very often incorrect, and the reasons for respondents' mistakes are poorly understood.
Based on Krosnick's theory of survey satisficing (Krosnick 1991, 1999) we develop hypotheses on the conditions of incorrect answers to proxy questions. According to this theory, respondents can derive answers in different modes of information processing, between which the quality of answers varies considerably. The response mode is determined by respondents' motivation, their (cognitive) abilities and the task difficulty. Furthermore, satisficing theory predicts interaction effects between the different determinants of the respondents' strategies of information processing.
We test these hypotheses in the unique setting of an ego-centric network study in which ego's proxy answers about the alters' characteristics have been validated by interviewing these alters. Assuming that self-reports of the peer group represent the correct answers to the proxy questions, we show that satisficing theory correctly predicts the likelihood of correct survey responses. More highly motivated respondents, those with high information availability, and those with high cognitive abilities are more likely to give correct answers, while an increase in task difficulty decreases this likelihood. Furthermore, only two interaction effects were found to be relevant: respondents' motivation and their cognitive ability are both significantly more relevant for difficult questions than for easy ones.
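The hypothesized structure can be sketched as a logit specification with main effects and the two motivation-by-difficulty and ability-by-difficulty interactions. This is a hedged illustration only: the coefficients below are made up, and the function is not the authors' estimated model.

```python
import math

# Illustrative logit model for the probability of a correct proxy answer,
# with main effects of motivation, ability, information availability and
# task difficulty, plus the two interactions satisficing theory predicts.
# All coefficient values are invented for illustration.

def p_correct(motivation, ability, availability, difficulty,
              b0=-0.5, b_mot=0.8, b_abl=0.6, b_avl=0.7, b_dif=-0.9,
              b_mot_dif=0.5, b_abl_dif=0.4):
    """Predicted probability of a correct proxy answer."""
    eta = (b0 + b_mot * motivation + b_abl * ability + b_avl * availability
           + b_dif * difficulty
           + b_mot_dif * motivation * difficulty   # motivation matters more when the task is hard
           + b_abl_dif * ability * difficulty)     # ability matters more when the task is hard
    return 1 / (1 + math.exp(-eta))

# With these coefficients, motivation raises accuracy more for difficult
# questions (difficulty=1) than for easy ones (difficulty=0):
gain_hard = p_correct(1, 0, 0, 1) - p_correct(0, 0, 0, 1)
gain_easy = p_correct(1, 0, 0, 0) - p_correct(0, 0, 0, 0)
print(gain_hard > gain_easy)
```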
Methodological studies demonstrate that aided recall methods may enhance data quality in retrospective surveys, especially if applications are tailor-made to the population(s) at hand. However, little research has been done into the factors that matter when 'tailoring' such tools. This also applies to calendar methods, which are being applied increasingly in both quantitative and qualitative studies and usually combine several aided recall tools.
This paper examines whether respondent characteristics and task difficulty factors can be used to tailor calendars. Since calendar methods usually add properties (like a grid, landmarks, instructions) to a question list, the ability of respondents to handle this 'complexity' might influence their effectiveness. Therefore, we examine whether the respondent's 'need for cognition', relevant sociodemographic characteristics and the difficulty of the recall task are related to the impact of the calendar.
Data stem from a field experiment on a calendar application (N=233) that was embedded in a standardized telephone survey on consumer purchase behavior. To assess recall accuracy, respondents' retrospective reports were compared with available purchase records.
Findings demonstrate that the calendar especially enhanced the accuracy of date recall for respondents with a high 'need for cognition' (NfC). Two different dimensions of NfC were found (need for 'thinking' and need for 'complexity') that interacted with the calendar method with respect to 'direction of error' (telescoping) and 'amount of error', respectively. The calendar seems to be less suitable for respondents with a low 'need for cognition'. Conclusions are drawn on the importance of these and other factors in tailoring calendars.
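The two error measures mentioned above can be made concrete with a small sketch. This is an assumed operationalization for illustration, not the authors' scoring procedure: the signed difference between reported and recorded purchase date captures direction of error (a positive value corresponds to forward telescoping, i.e. dating the purchase too recently), and its absolute value captures amount of error.

```python
from datetime import date

# Hypothetical illustration of the two recall-error measures:
# signed error = direction (forward telescoping when positive),
# absolute error = amount, both in days.

def recall_errors(reported: date, actual: date) -> tuple:
    """Return (signed error, absolute error) in days for one purchase report."""
    signed = (reported - actual).days  # > 0: event reported as too recent
    return signed, abs(signed)

signed, amount = recall_errors(date(2023, 5, 20), date(2023, 5, 3))
print(signed, amount)  # 17 17 -> purchase dated 17 days too recent (forward telescoping)
```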
Respondents are more likely to disagree with negative survey questions (This book is bad. Yes/No) than to agree with positive ones (This book is good. Yes/No). In this study, we relate this effect to the cognitive processes underlying question answering (see Tourangeau, Rips & Rasinski, 2000). Using eye-tracking, we show that negative questions are somewhat more difficult to comprehend than their positive counterparts. Yet, respondents retrieve the same information from memory and they activate the same opinion about the attitude object. This suggests that contrastive questions measure the same underlying attitude, and hence, are equally valid. The main answering difference occurs during the final cognitive stage: for negative questions, it is more difficult to map an opinion to the responses than for positive ones. Therefore, it is likely that respondents give different answers to contrastive questions because response categories (yes/no) do not carry an absolute meaning, but are given meaning relative to the evaluative term in the question (good/bad).
While the dominant strand of survey psychology deals with the identification of general tendencies in survey response, recent work has also attempted to identify and explain individual differences in the ways people respond to surveys. The proposed study falls in the latter category, as it seeks to find factors that explain why some people respond more slowly or quickly to survey questions. The study approaches the question from a behavior genetic point of view. Individual differences in survey response characteristics have been tied to dispositional traits, such as the Big Five personality traits. These covariations suggest that inherited factors could, at least partially, explain individual differences in survey response times. This study utilizes a classical twin design to identify the impact of dispositional characteristics, socialized factors and environmental drivers unrelated to one's familial surroundings on the time it takes someone to respond to survey questions. Both identical and fraternal twin pairs completed a computer-assisted self-administered questionnaire in which the response times to each item, as well as to the entire questionnaire, were recorded. We utilize information on identical and fraternal co-twin similarity and use a structural equation model to estimate the impact of additive genetic, familial and non-familial environmental factors in explaining individual differences in survey response times. Implications of the findings, as well as possible specific sources of individual differences in survey response times, are discussed.
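The logic of the classical twin design described above can be sketched with Falconer's formulas, a simpler stand-in for the structural equation model the authors estimate: identical (MZ) twins share all additive genetic variance and fraternal (DZ) twins share half, while both share the familial environment. The co-twin correlations below are hypothetical.

```python
# Hedged sketch of the ACE variance decomposition via Falconer's formulas
# (an approximation; the study itself fits a structural equation model).
# A = additive genetic, C = common/familial environment, E = non-familial
# environment (including measurement error).

def ace_estimates(r_mz: float, r_dz: float) -> dict:
    """Decompose standardized trait variance from MZ and DZ co-twin correlations."""
    a = 2 * (r_mz - r_dz)  # heritability: MZ share 100%, DZ share 50% of A
    c = r_mz - a           # equivalently 2*r_dz - r_mz
    e = 1 - r_mz           # variance not shared even by MZ twins
    return {"A": a, "C": c, "E": e}

# Hypothetical co-twin correlations of response times:
print(ace_estimates(0.6, 0.4))
```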