Researching Sensitive Topics: Improving Theory and Survey Design 1

Chair | Dr Ivar Krumpal (University of Leipzig)
Coordinator 1 | Professor Ben Jann (University of Bern)
Coordinator 2 | Professor Mark Trappmann (IAB Nürnberg)
The validity of responses to sensitive questions has been a topic in survey research for several decades. Within this context, social desirability is the most frequently studied response effect: the tendency of respondents to overstate positive behaviours or characteristics and to understate negative ones (cf. Holtgraves 2004).
Various attempts have been made to assess the extent to which socially desirable responses bias survey results, and to develop ways to prevent such distortion. Besides classical survey methods such as anonymous interview settings (e.g. sealed envelopes) and the implementation of need-for-social-approval and trait-desirability scales, several techniques have been designed specifically to encourage respondents to answer truthfully, for example the randomized response technique, the item count technique, faking instructions, and the bogus pipeline. However, doubts have been cast on the effectiveness of these techniques in eliciting more valid responses (cf. Wolter and Preisendörfer 2013, among others). Researchers have therefore turned to other approaches to identify socially desirable responses and to gain a better understanding of why and how people answer in a socially desirable way. One such approach analyzes paradata collected about the survey process, often in the form of response latencies. Drawing on theories of cognitive information processing, response latencies are then used as proxies to infer information processing modes.
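As a brief illustration of one of the techniques named above, the following sketch simulates the classic randomized response design, in which a randomizing device decides whether a respondent answers the sensitive question or its negation, and then recovers the prevalence from the observed yes-rate. All parameter values here are illustrative assumptions, not data from any study cited in this session.

    import random

    def simulate_rrt(true_prevalence, n, p=0.7, seed=1):
        """Warner-style randomized response: with probability p the
        respondent answers the sensitive question truthfully, with
        probability 1 - p its negation; only yes/no is observed."""
        rng = random.Random(seed)
        yes = 0
        for _ in range(n):
            has_trait = rng.random() < true_prevalence
            answers_direct = rng.random() < p
            yes += has_trait if answers_direct else not has_trait
        return yes / n

    # Observed yes-rate: lam = p*pi + (1 - p)*(1 - pi), so the
    # prevalence estimator is pi_hat = (lam + p - 1) / (2*p - 1).
    p = 0.7
    lam = simulate_rrt(true_prevalence=0.20, n=100_000, p=p)
    pi_hat = (lam + p - 1) / (2 * p - 1)
    print(f"observed yes-rate {lam:.3f}, estimated prevalence {pi_hat:.3f}")

Because the interviewer never learns which question was answered, individual responses reveal nothing definitive about the respondent, which is the design's privacy rationale.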
So far, the evidence is mixed as to whether socially desirable responding is indicated by shorter or longer response latencies. This paper aims to contribute to a better understanding of response latencies and their application in identifying bias in surveys.
We concentrate on both respondent-level and item-level characteristics (need for social approval and trait desirability, respectively) in a multilevel regression analysis and use them to predict response latencies. On a theoretical level, we integrate several competing but ultimately compatible explanatory models of response behaviour into a more general framework. The analysis is based on data collected in CASI surveys (n=550) in which respondents took part in groups in a controlled, supervised survey setting.
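To make the modelling approach concrete, here is a minimal sketch of a multilevel (mixed-effects) regression of response latency on a respondent-level need-for-social-approval score and an item-level trait-desirability rating. The file name, column names, and the simplification to respondent random intercepts are illustrative assumptions, not the authors' actual specification.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical long-format data: one row per respondent-item pair.
    # Assumed columns: respondent_id, item_id, latency_ms,
    # need_approval (respondent-level), trait_desirability (item-level).
    df = pd.read_csv("latencies_long.csv")

    # Latencies are typically right-skewed, so model them on the log scale.
    df["log_latency"] = np.log(df["latency_ms"])

    # Random intercepts for respondents; the item-level predictor enters
    # as a fixed effect (a simplification of a fully crossed design).
    model = smf.mixedlm(
        "log_latency ~ need_approval + trait_desirability",
        data=df,
        groups=df["respondent_id"],
    )
    print(model.fit().summary())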
Our findings indicate that it is important to differentiate between socially desirable and socially undesirable attitudes and behaviour, as they seem to elicit distinctly different response patterns: clearly desirable attitudes and behaviour elicit fast responses, while clearly undesirable ones lead to generally slower responses. We link these findings to the theoretical and empirical debates about the use of response latencies as proxies for information processing modes and thereby contribute to a more effective use of response latencies in identifying response bias on the grounds of social desirability.
References:
Holtgraves, T. (2004): Social Desirability and Self-Reports: Testing Models of Socially Desirable Responding. In: Personality and Social Psychology Bulletin, Vol. 30, No. 2, 161-172.
Wolter, F.; Preisendörfer, P. (2013): Asking Sensitive Questions: An Evaluation of the Randomized Response Technique Versus Direct Questioning Using Individual Validation Data. In: Sociological Methods & Research, Vol. 00, No. 0, 1-33.
Despite its frequency in the U.S., abortion remains a highly sensitive, stigmatized and thus difficult-to-measure behaviour. Furthermore, underreporting is not random; some groups are less likely to report their abortions than others. Less is known about the reporting of other pregnancy outcomes. Underreporting means that we have an incomplete, and possibly biased, picture not only of abortions but also of pregnancies in the US. Research is needed to understand who is underreporting and why, and to assess the potential biases in pregnancy data in nationally representative surveys.

The National Survey of Family Growth (NSFG) uses audio computer-assisted self-interviews (ACASI) to measure abortion, in addition to face-to-face (FTF) interviews, in order to elicit more complete reporting. We analyse data from the 2002, 2006-2010, and 2011-2013 NSFGs to examine the effectiveness of the ACASI in improving reporting of abortion, and consider other factors which may influence the sensitivity of abortion reporting. We capitalize on reporting differences by pregnancy outcome (abortion, live birth, miscarriage), reporting mode (FTF vs. ACASI), retrospective reporting period (lifetime vs. last five years), and time period (2002, 2006-2008, 2008-2010, 2011-2013).

Reporting of abortions was higher using the ACASI, suggesting that privacy and stigma are important factors in women's willingness to disclose abortions. The ACASI elicited relatively more abortions among non-white women and low-income women, suggesting that stigma may be felt differently by different groups. For all pregnancy outcomes, the ACASI elicited relatively more reporting where a five-year, as opposed to a lifetime, recall period was used. Survey factors might affect different pregnancy outcomes in different ways, depending on their sensitivity and their salience. Across all outcomes, but most notably for miscarriages and abortions, reporting ratios increased between 2006-2008 and 2011-2013. This may reflect changes in the sensitivity of reporting miscarriages and abortions, in the effectiveness of the ACASI, or in willingness to take part in surveys. The ACASI may work differently across time, for different measures, and within varying survey contexts. Miscarriage also appears to be a sensitive outcome; this finding should be explored further.
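To illustrate how mode-specific reporting comparisons of this kind can be tabulated, here is a minimal sketch assuming a hypothetical event-level extract with assumed columns for pregnancy outcome, interview mode, and survey period. Note that published NSFG analyses often define reporting ratios against external benchmark counts; the ACASI-to-FTF comparison below is just one simple operationalization, not the measure used in this abstract.

    import pandas as pd

    # Hypothetical extract: one row per reported pregnancy.
    # Assumed columns: outcome ("abortion", "live_birth", "miscarriage"),
    # mode ("FTF" or "ACASI"), period ("2002", "2006-2008", ...).
    df = pd.read_csv("nsfg_extract.csv")

    # Count reported events by period, outcome, and interview mode.
    counts = (
        df.groupby(["period", "outcome", "mode"])
          .size()
          .unstack("mode")
    )

    # Relative reporting: events disclosed in ACASI versus FTF.
    counts["acasi_to_ftf"] = counts["ACASI"] / counts["FTF"]
    print(counts)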