Reflecting on Failed Research
Chair | Professor Martin Weichbold (University of Salzburg) |
Coordinator 1 | Professor Wolfgang Aschauer (University of Salzburg) |
Coordinator 2 | Professor Nina Baur (Technical University Berlin) |
Before conducting interviews, thorough preparation and familiarity with interviewing techniques are essential. But how can one prepare oneself (and others) for unexpected turns during interviews? When do we consider an interview a failure? What can we learn from such interviews, and how can we incorporate these experiences when teaching others about interviewing techniques? Research has addressed pitfalls in conducting interviews, e.g. interviewees who have their own agenda and therefore narrate something other than what the interviewer asked about, or silence and mostly nonverbal communication, which generates less text for analysis.
In this paper, pitfalls relating to emotional challenges during interviews, and in particular to interviewee aggression, are reflected upon. One interview and one focus group setting from two different projects serve as examples, and the reflection proceeds in several steps to identify and generalise key insights. In the interview situation, the interviewee turned aggressive and began to question the interviewer as well as the interview itself; here the relevance of power imbalance, age, gender and interviewer preparation is discussed. In the focus group situation, the topic was being a second-generation migrant and which resources and benefits derive from that. At first, participants were excited about the planned meeting with first-, second- or third-generation migrants. During the discussion, however, they became disillusioned and frustrated because they felt rejected by society and saw the focus group moderators as representatives of that society.
Analysis of and reflection on challenging interviews start with one's attitude towards the interviewee and the interview situation: power imbalance, respecting boundaries, the influence of gender and age, preparation before interviewing others, but also the influence of the discussion topic on the interviewee or focus group participants. Knowledge of, and practice in, containing the interviewee's (and the interviewer's) emotions are necessary. Aggression during interviews can happen. In some situations it is possible to work with this aggression and to reflect on it during the interview or focus group, thereby offering a chance to express such feelings. In other situations, the interview may have to be adjusted so that both parties can part on good terms. Analysing and reflecting on interview situations is an essential part of the research process, and challenging interview situations are good examples for teaching students about pitfalls and how to cope with them.
Since its introduction in the 1950s, party identification (PID) has become one of the most widely used concepts in election studies. In its original notion, PID denotes a long-standing psychological affiliation with a political party (Campbell et al., 1960). It is usually measured with a single-item question that shows serious flaws, as most standard questions focus on the affective dimension of PID and do not allow multiple identifications to be measured (Weisberg, 1999). Several authors have tried to introduce new measures of PID by adapting established instruments from social identity research (e.g. Greene, 1999; Green et al., 2002; Bankert, Huddy, and Rosema, 2016). However, as these instruments consist of several items, they require considerably more survey time, especially when they are asked for all major parties in order to tap multiple identifications.
Recently, the single-item social identification (SISI) measure was introduced by Postmes et al. (2013); it is supposed to measure in-group identification reliably and was further validated by Reysen et al. (2013). Adapting SISI to the measurement of PID (SISI-PID) would provide a valid, social-psychologically founded measure that can track multiple identifications while requiring far less survey time than previous attempts based on larger scales.
The SISI-PID was first included in a German online survey (November 2013, N=1,000) with quotas for age and state based on the German Microcensus. The first results were promising: about 68 percent of all respondents were classified as party adherents by the standard question as well as by the new measure. The moderate correlation between the two measures (r = .45***) makes sense from a theoretical point of view, as the German standard question often also taps mere party sympathizers.
Second, the SISI-PID was included in two waves of the GESIS Panel (June 2015 and 2016, N=3,620), a probability-based mixed-mode access panel that is representative of the German population. Here, however, the shares of party identifiers differ considerably: whereas about 81 percent of respondents are classified as party adherents by the standard question, only 40 percent are classified by the new measure (r = .26***). This difference cannot be explained by survey mode, as the share among online participants is even lower than among offline participants (38 versus 45 percent).
As PID is usually the strongest predictor of vote choice, it is pointless to propose a measure that puts the share of adherents below 50 percent when the standard question finds a much larger share of party adherents. In this paper, I aim to identify reasons for the difference in the share of adherents between the two surveys. One possible reason is that SISI-PID results are easily affected by short-term factors such as election periods. As the results of the second wave will be released in December 2016, I will be able to tell whether these differences are consistent.