It’s the Interviewers! New developments in interviewer effects research 2

Chair: Dr Salima Douhou (City University of London, CCSS)
Coordinator 1: Professor Gabriele Durrant (University of Southampton)
Coordinator 2: Dr Olga Maslovskaya (University of Southampton)
Coordinator 3: Dr Kathrin Thomas (City University of London, CCSS)
Coordinator 4: Mr Joel Williams (TNS BMRB)
It is well known that the presence of an interviewer can affect responses and thereby introduce variance and bias into survey estimates. For instance, some respondents adjust their true answers towards social norms or towards perceived characteristics of the interviewer in order to appear in a good light. When investigating these types of interviewer effects, survey research has mainly focused on interviewer socio-demographics; only a few studies have examined the effects of characteristics that are not directly observable, such as interviewer personality, attitudes and beliefs. Moreover, survey research lacks insights into how interviewers' and respondents' interpersonal perceptions of each other affect respondents' answers to related questions.
For this project, self-reports of 1,184 respondents and 114 interviewers, as well as their perceptions of each other, were collected. Data collection took place in the context of the 2015 wave of the Socio-Economic Panel Study Innovation Sample (SOEP-IS), a large-scale longitudinal face-to-face household survey in Germany. Respondents and interviewers were presented with the same questions covering a variety of political and social issues, such as political party identification and attitudes towards abortion and the legalization of drugs.
This presentation includes results on a) the effects of interviewers' own opinions on respondent answers, b) the nature and accuracy of interpersonal inferences, and c) their impact on respondents' self-reports. Initial results show that respondents and interviewers are able to infer each other's opinions and attitudes quite accurately. Moreover, the results reveal a strong association between a respondent's answer and that respondent's inference about his or her interviewer's opinion, indicating that some respondents adjust their true answers towards anticipated interviewer opinions.
Interviewer error is known to be one of the sources of variability in face-to-face surveys. Kish (1962) reports that variance in respondents’ answers can be partially explained by interviewer clustering, with an intra-class correlation varying between 0.05 and 0.10. Interviewers behave differently in the interviewing process, e.g. when probing, and they create a particular atmosphere which influences respondents’ thought and answer processes (Mangione et al., 1992). Distortion in data collection due to interviewer error is composed of two types of effects: bias, “when there is a dominant and systematic effect of interviewers”, and variance, “when these systematic effects differ between interviewers” (Loosveldt, 2008, p. 215).
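For reference, the interviewer-level intra-class correlation Kish refers to is conventionally written as follows (a standard formulation added here for clarity, not quoted from the abstract):

```latex
\rho_{\text{int}} = \frac{\sigma^2_b}{\sigma^2_b + \sigma^2_w}
```

where \(\sigma^2_b\) is the between-interviewer variance and \(\sigma^2_w\) the within-interviewer (residual) variance; values of 0.05 to 0.10 thus mean that 5 to 10 per cent of the response variance is attributable to interviewer clustering.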
The objective of this study is to understand the extent to which interviewer error affects immigration attitudes. This will be done with European Social Survey (ESS) data using multilevel analysis: respondents (level 1) are clustered within interviewers (level 2). A further two levels could be added, regions (level 3) and countries (level 4), to account for the variability that these geographical, cultural and social levels could explain. Since interviewer effects are larger for attitudinal than for factual questions (Schnell & Kreuter, 2005), their role can be assessed on rotating module D (immigration, including attitudes, perceptions and policy preferences) of ESS round 7 data from 2014. Certain sensitive attitudinal questions, such as those asking whether immigrants generally take jobs away from workers in a country, may be prone to social desirability bias; the mere presence of an interviewer may cause a systematic error in respondents’ answers.
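A minimal sketch of the two-level random-intercept model described above, assuming a hypothetical ESS round 7 extract with placeholder column names (imm_attitude, interviewer_id, and the covariates below); the abstract does not specify the estimation software, and the regional and country levels would require a more elaborate variance-components specification:

```python
# Two-level model: respondents (level 1) nested within interviewers (level 2).
# Column and file names are hypothetical placeholders, not from the abstract.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("ess7_module_d.csv")  # hypothetical ESS round 7 extract

# Random intercept for interviewers; fixed effects for respondent-level
# (level 1) and interviewer-level (level 2) covariates such as age and gender.
model = smf.mixedlm(
    "imm_attitude ~ resp_age + resp_gender + intv_age + intv_gender",
    data=df,
    groups=df["interviewer_id"],
)
result = model.fit()
print(result.summary())

# Interviewer-level intra-class correlation from the fitted variance
# components: between-interviewer variance over total variance.
sigma2_b = result.cov_re.iloc[0, 0]  # random-intercept (between) variance
sigma2_w = result.scale              # residual (within) variance
print("interviewer ICC:", sigma2_b / (sigma2_b + sigma2_w))
```

The ICC printed at the end corresponds directly to the Kish statistic quoted above, so its magnitude can be compared with the 0.05 to 0.10 benchmark.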
The main task is to isolate confounding indicators and potential sources of variability, zooming in on the interviewer effect. Respondents’ socio-demographic profiles, locality/rurality, area/region and country are some of the effects that could explain attitudes and perceptions towards immigrants and immigration. The focus will be on disentangling these from the interviewer effect and showing the magnitude of such error for various attitudinal outcomes on immigration. In addition, the role of interviewer indicators (level 2 variables), such as age and gender, will be assessed.
Understanding interviewer error across face-to-face surveys not only enriches the state of the art but also helps in designing strategies to reduce such error.
How and why do observable interviewer traits, including interviewer gender and religious dress, affect survey responses and item non-response in the Middle East and North Africa? One potential cause of widely divergent survey findings on gender-sensitive issues in the Arab Barometer and other cross-national surveys may be measurement or non-response bias stemming from observable interviewer traits. Using three nationally representative surveys spanning the initial three-year post-Ben Ali period in Tunisia (1,202 Tunisians in 2012, 1,220 in 2014, and 3,600 in 2015), this paper assesses the link between interviewer gender and responses to questions about women’s rights in the public and private spheres. Interviewer gender affects the likelihood of respondents favoring gender equality, but the nature and size of the effects depend on interviewer religious dress as well as on the type of rights. Secular-appearing female interviewers receive the most progressive responses for all questions, while religious-appearing males receive the most traditional. However, when asking about private issues such as Shari’a law in the family code, secular-appearing males receive more progressive responses than religious-appearing female interviewers, while for public issues such as women in parliament, the opposite is true. The data offer strong support for social distance and ingroup loyalty theories across all respondent types, and for power relations theory for male respondents in conversations with female interviewers. Implications for reducing survey error and for understanding gender relations in the Middle East and North Africa are considered.
Research has shown that interviewers can have important effects on respondent answers (Blom and Korbmacher, 2013; Davis, Couper, Janz, Caldwell, and Resnicow, 2010; Groves, 1989; Groves et al., 2009). Potential bias introduced by interviewer religious wardrobe on related survey items is of particular concern in countries facing religious and political upheavals, such as those in the Middle East and North Africa region. For example, studies in this region have found that interviewers wearing Islamic (rather than secular) symbols, or wearing the Islamic hijab (vs. no hijab), received increased reporting of religious attitudes, either directly or through an interaction with respondent characteristics (Blaydes & Gillum, 2013; Benstead, 2014; Koker, 2009; Mneimneh et al., 2015). However, little is known about the effect of interviewers’ own attitudes. We have recently shown that an interviewer’s own religious attitudes affected respondents’ reported religious attitudes independently of interviewer religious wardrobe; the effect of an interviewer’s attitudes was as large as, and sometimes larger than, the effect of the interviewer’s religious wardrobe (Mneimneh et al., 2015). The literature, however, lacks explanations of the mechanisms behind these effects. Are interviewers mirroring the attitudes of the respondents they are interviewing, or are they projecting their own attitudes onto the respondents? Are the effects transmitted through potential side conversations about religious topics between the respondent and the interviewer?
Using recently available panel survey data from a second wave of data collected in Tunisia in 2015, this paper investigates these research questions by looking at interviewer attitudinal measures collected before the fieldwork and contrasting their effects with interviewer measures collected after the fieldwork. Moreover, observational measures of side conversations on religious and political topics were collected, allowing for an investigation of their potential mediating or moderating effects on the relationship between interviewers’ and respondents’ attitudes.
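One common way to operationalize the moderation part of this question (a sketch added for illustration; the authors’ actual specification is not given in the abstract) is a cross-level interaction between the interviewer’s pre-fieldwork attitude and an indicator for a religious side conversation:

```latex
y_{ij} = \beta_0 + \beta_1 A_j + \beta_2 S_{ij} + \beta_3 \, (A_j \times S_{ij}) + u_j + e_{ij}
```

where \(y_{ij}\) is respondent \(i\)’s reported attitude under interviewer \(j\), \(A_j\) the interviewer’s pre-fieldwork attitude, \(S_{ij}\) an indicator for a side conversation in that interview, and \(u_j\) an interviewer random intercept; a non-zero \(\beta_3\) would indicate that side conversations moderate the transmission of interviewer attitudes.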