Sensitive Questions in Surveys - Drawbacks and Problems When Applying Indirect Questioning Techniques
Session Organisers | Miss Julia Jerke (University of Zurich), Dr David Johann (University of Zurich)
Time | Tuesday 16th July, 14:00 - 15:30 |
Room | D01 |
Social Desirability Bias (SDB) is well documented and frequently discussed in survey methodology; it leads to non-response as well as to under- or over-reporting of certain characteristics. Over the past decades, researchers in sociology, psychology, and related fields have developed indirect questioning techniques, such as the Crosswise Model and other randomized response techniques as well as the Item Count and Item Sum Techniques, to tackle this bias. All of these methods share the inclusion of an anonymization procedure that protects the respondent in one way or another while still enabling researchers to estimate the prevalence of the sensitive characteristic in the sample.
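To make the estimation step above concrete, here is a minimal sketch (not part of the session description) of how the Crosswise Model recovers a prevalence estimate from fully anonymized answers; the randomizer prevalence p and the counts in the example are hypothetical.

```python
# Crosswise Model: each respondent reports only whether their answers to the sensitive
# question and to a non-sensitive randomizer question with known prevalence p
# (e.g. "Is your mother's birthday in January or February?") are the same or different.
import math

def crosswise_estimate(n_same: int, n_total: int, p: float):
    """Return (prevalence estimate, standard error) for the sensitive trait.

    P(same) = pi * p + (1 - pi) * (1 - p), hence
    pi_hat = (lambda_hat + p - 1) / (2 * p - 1), provided p != 0.5.
    """
    if not 0 < p < 1 or abs(p - 0.5) < 1e-9:
        raise ValueError("p must lie in (0, 1) and differ from 0.5")
    lam = n_same / n_total                     # observed share of 'same' answers
    pi_hat = (lam + p - 1) / (2 * p - 1)       # method-of-moments estimator
    se = math.sqrt(lam * (1 - lam) / n_total) / abs(2 * p - 1)
    return pi_hat, se

# Hypothetical example: 640 of 1,000 respondents answer 'same', randomizer p = 0.25
print(crosswise_estimate(640, 1000, 0.25))     # roughly (0.22, 0.03)
```

The further p lies from 0.5, the smaller the standard error but the weaker the perceived protection, which is one source of the trade-offs discussed below.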
A series of studies conducted over the last decades has examined the applicability of these indirect questioning techniques but yielded mixed results regarding their success in tackling SDB. For example, these studies suggest that the beneficial use of these techniques depends on the survey mode, the population and sample, the general topic of the survey, or certain respondent characteristics. However, once randomization procedures are introduced to protect respondents, we know little about respondents’ answering process in the field; it largely remains a black box. Respondents’ acceptance of these methods is also hard to gauge, since it is difficult to assess whether or not they follow the instructions. Moreover, while RRTs (especially the Crosswise Model) were initially evaluated very positively, current evidence indicates that the methods themselves introduce unexpected and substantial bias related to complex and hard-to-process introductions and instructions. It is therefore necessary to further encourage research that disentangles respondents’ answering process when indirect questioning techniques are used.
We particularly invite contributions to this session that address drawbacks and problems when applying indirect questioning techniques and that discuss possible solutions. This includes papers that systematically investigate under which conditions such indirect questioning techniques work and under which conditions they produce bias by design. We also welcome papers that test for compliance with and trust in indirect questioning techniques.
Keywords: survey methodology, sensitive questions, indirect questioning techniques, randomized response technique, item count technique
Ms Antonia Velicu (University of Zurich) - Presenting Author
Survey respondents tend to present themselves in a more favorable light, especially when asked unpleasant questions. This so-called social desirability bias introduced by sensitive questions often distorts survey responses. As a remedy, research draws on indirect questioning formats that aim to protect respondents’ privacy and ensure their anonymity. Two prominent examples of such techniques are the Crosswise Model (CM) and the Item Count Technique (ICT). Both methods follow unconventional structures that use randomization procedures but also require long, complex, and dense instructions. Previous research has suggested that ICT and CM produce more truthful answers but, at the same time, impose a higher cognitive burden on respondents. It is commonly believed that respondents fully understand and follow these more demanding instructions; recent research suggests, however, that this is not always the case. To further investigate this notion, I conduct a meta-analysis of the ICT and CM and analyze the introductions and instructions of these methods to answer two core questions: First, how do studies differ with regard to the instructions? Second, how do instruction characteristics (such as length and complexity) affect the performance of these techniques? The results of this research have implications for researchers and practitioners working with these techniques, but also for the broader field of measuring and analyzing sensitive characteristics in surveys.
Dr Sebastian Rinken (Instituto de Estudios Sociales Avanzados, Consejo Superior de Investigaciones Científicas (IESA-CSIC)) - Presenting Author
Dr Sara Pasadas del Amo (Instituto de Estudios Sociales Avanzados, Consejo Superior de Investigaciones Científicas (IESA-CSIC))
The list experiment, or item count technique (ICT), is widely credited with facilitating true scores on sensitive questions. The prevalence of a sensitive attitude is computed by comparing the mean scores of the control (N items) and treatment (N+1 items) groups. Since respondents only report a number, their assessment of any particular list item remains opaque to observers. Thus, research participants perceive anonymity to be guaranteed.
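As a brief illustration of the difference-in-means estimator just described, consider the following sketch; the toy counts are hypothetical and not the authors' data.

```python
# ICT / list experiment: the prevalence of the sensitive item is estimated as the
# difference between the mean item counts of the treatment (N+1 items) and
# control (N items) groups.
import math
from statistics import mean, variance

def ict_estimate(treatment_counts, control_counts):
    """Difference-in-means prevalence estimate with a simple standard error."""
    diff = mean(treatment_counts) - mean(control_counts)
    se = math.sqrt(variance(treatment_counts) / len(treatment_counts)
                   + variance(control_counts) / len(control_counts))
    return diff, se

# Hypothetical toy data: treatment group saw 4 items, control group saw 3
treatment = [2, 3, 1, 2, 4, 2, 3, 1, 2, 3]
control = [1, 2, 1, 2, 3, 1, 2, 1, 2, 2]
print(ict_estimate(treatment, control))        # difference of means around 0.6
```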
However, much depends on how exactly the experiment is executed. A socially undesirable attitude is unlikely to be reported truthfully when a respondent perceives all control items unfavorably (ceiling effect) or favorably (floor effect). Zigerell (2011) argues that distortions may arise even with balanced lists, due to concerns about even remote associations with the sensitive item. However, there is little empirical evidence of such deflation effects.
We report results from a list experiment embedded in a mixed-mode probability-based panel survey on anti-immigrant sentiment (AIS) administered in 2016 by the Institute for Advanced Social Studies (IESA-CSIC). In a list of social groups, immigrants were the sensitive (treatment) item, while bankers, obese people, and obsessive gamblers served as controls; respondents were asked toward how many of the listed groups they felt antipathy. For comparison, the questionnaire also contained a direct AIS gauge.
At first glance, the experiment worked fabulously: ICT generated an AIS estimate almost twice as high as the obtrusive measurement. Yet cross-tabulating the two kinds of results revealed that the most immigrant-friendly respondents (according to the obtrusive measure) in the treatment group had reported artificially low item counts, thus producing negative AIS estimates. By striving to avoid even the remotest possibility of being associated with AIS, these respondents deflated the overall AIS estimate.
These data demonstrate beyond reasonable doubt that ICT does not automatically eliminate social desirability bias.
Professor Rainer Schnell (University of Duisburg-Essen)
Dr Kathrin Thomas (Princeton University) - Presenting Author
Dr Marcel Noack (University of Duisburg-Essen)
Previous research has shown that Randomized Response Techniques frequently produce "better" estimates than direct question formats under the more-is-better assumption. One method that has gained popularity over the past decade is the so-called Crosswise Model. As many studies using the Crosswise Model are based on student or other non-probability samples, it remains open whether the potential success of the technique can be generalized to probability samples of general populations. We empirically test for potential subgroup effects by studying the performance of the Crosswise Model in estimating undeclared employment in private households in Germany. The survey data are based on a probability sample of the general population provided by the German Socio-Economic Panel, using an interviewer-administered face-to-face mode of data collection. While the results suggest differences between the estimates obtained with the Crosswise Model and a direct question format, we also find subgroup differences by education and income. Our findings raise concerns about the extent to which the Crosswise Model is actually able to enhance respondents' perceptions of privacy protection and, ultimately, to better measure sensitive behavior in general populations.
Mrs Shu-Hui Hsieh (Academia Sinica) - Presenting Author
In survey research with sensitive questions related to drug abuse history, homosexual activities, abortion experience, etc., the randomized response technique (RRT) proposed by Warner (1965) enables researchers to enhance respondents' cooperation and to collect data indirectly, such that nonresponse and social desirability bias can be reduced. Unfortunately, most studies on sexual orientation identity rely on direct self-report questioning; so far, little is known about sexual orientation identity, and few studies have used RRT. Focusing on Taiwan, a liberal and economically developed society, we conducted face-to-face interviews using the multi-level randomized response technique (MRRT) developed by Hsieh et al. (2018) to collect sexual orientation identity data. We examine the distribution of sexual orientation identity among respondents aged 20 years or older in Taiwan across subgroups defined by gender, age, education, marital status, and region.
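For readers unfamiliar with the design cited above, the following sketch simulates Warner's (1965) classic randomized response procedure and recovers the prevalence of a sensitive trait from anonymized yes/no answers alone; the true prevalence, design probability, and sample size are hypothetical and do not describe the MRRT study summarized here.

```python
# Warner's RRT: with known probability p the respondent answers the sensitive
# statement, otherwise its negation; the interviewer observes only "yes"/"no".
import random

def simulate_warner(pi_true: float, p: float, n: int, seed: int = 1) -> float:
    """Simulate n respondents under Warner's design and recover the prevalence."""
    rng = random.Random(seed)
    n_yes = 0
    for _ in range(n):
        has_trait = rng.random() < pi_true
        answers_direct = rng.random() < p      # randomization device, outcome unseen
        says_yes = has_trait if answers_direct else not has_trait
        n_yes += says_yes
    lam = n_yes / n                            # observed share of "yes" answers
    return (lam - (1 - p)) / (2 * p - 1)       # Warner's estimator, requires p != 0.5

print(round(simulate_warner(pi_true=0.15, p=0.7, n=5000), 3))   # close to 0.15
```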
Ms Melike Saraç (Hacettepe University Institute of Population Studies) - Presenting Author
Professor İsmet Koç (Hacettepe University Institute of Population Studies)
Women generally tend to underreport or misreport their abortion experiences, mainly because abortion is considered a sensitive issue for cultural, religious, political, or other reasons in many countries across the world. Turkey, where induced abortion has increasingly been considered a sensitive issue in recent years, owing to intense statements against it by influential politicians on religious grounds and to a hidden prohibition of the practice, especially in public health facilities, is no exception. This study focuses on the extent to which induced abortion was misreported in Turkey in the 1993 and 2013 survey years. The paper uses an indirect technique, a probabilistic classification model based on the months preceding the termination of a pregnancy, in order to classify pregnancies accurately and then compare them with the survey results. The findings suggest that (1) the level of misreporting of induced abortions increased from 18 to 53 percent between 1993 and 2013, and (2) women from lower segments of society appear to misreport their induced abortion experiences much more often than those from higher segments. Considering misreporting as a special type of underreporting of induced abortions, its incidence in Turkey was influenced by social desirability bias, a consequence of the highly sensitive nature of the phenomenon as well as of its cultural, religious, and political connotations. In other words, the remarkable increase in the misreporting of induced abortions over the last decades may be associated to a great degree with the prevailing political environment and the social stigmatization of induced abortion in Turkey. This calls for detailed methodological studies that investigate the reasons behind the underreporting of induced abortion using quantitative and qualitative techniques.