Researching Sensitive Topics: Improving Theory and Survey Design 2
Chair | Dr Ivar Krumpal (University of Leipzig)
Coordinator 1 | Professor Ben Jann (University of Bern) |
Coordinator 2 | Professor Mark Trappmann (IAB Nürnberg) |
This paper assesses two distinct yet compatible research techniques that are widely supposed to reduce social desirability bias: (a) data collection via self-administered rather than interviewer-administered survey modes, and (b) measurement via an indirect gauge (the item-count technique) rather than an explicit questionnaire item. Substantively, the study aims to estimate the prevalence of anti-immigrant sentiment, a notoriously sensitive and bias-prone research topic. In contrast with the bulk of extant scholarship on attitudes toward immigration and immigrants, we consider virulent anti-immigrant sentiment (i.e., generalized antipathy toward immigrants) to be qualitatively different from generic qualms about immigration’s impact and management, for example with regard to the labor market; at the same time, we recognize that the prevalence of virulent animosity is prone to be underestimated by explicit questionnaire items. This paper therefore explores research procedures that promise to avoid two pitfalls: sprawling imputations of gratuitous hostility, on the one hand, and exceedingly narrow measurement confined to outspoken hostility, on the other. Hence, we aim to minimize not only the potentially large share of “false negatives” generated by the latter, but also the “false positives” incurred by expansive notions of prejudice.
Our dataset stems from a combined CAWI-CATI survey (N = 1,232) conducted in 2016, using a mixed-mode probability-based panel recruited and maintained by the Institute for Advanced Social Studies (IESA), a unit of the Spanish National Research Council (CSIC). The questionnaire included an indirect measure of virulent anti-immigrant sentiment, obtained via a list experiment (item-count technique), and an explicit gauge of the same focal construct. As predicted by extant scholarship, in the full sample the former yields a significantly higher estimate of out-group rejection than the latter, by a margin of seven percentage points. Although the size of that divergence is interesting in its own right, our research question derives from the dataset’s mixed-mode design. On the assumption that social desirability bias is due largely, or indeed primarily, to the interviewer’s perceived role as a “representative” of moral norms such as tolerance and inclusiveness, explicit measures can be expected to yield significantly higher estimates of animosity, net of other factors, in the self-administered subsample than in the interviewer-administered one (H1). And on the assumption that the interviewer effect diminishes strongly, or even disappears, when unobtrusive gauges are employed, that mode differential should shrink, or even vanish, when the item-count technique is used (H2). Since respondents’ sociodemographic profiles vary by survey mode, those gaps are not readily apparent in the raw data; their computation will therefore constitute the paper’s core results.
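For readers unfamiliar with the item-count technique, a minimal sketch of its difference-in-means estimator follows. The data are simulated and all quantities (list length, sample sizes, the assumed true prevalence) are hypothetical placeholders, not figures from the study described above.

```python
# Minimal sketch of the item-count (list-experiment) estimator.
# All data here are simulated for illustration only.
import numpy as np

rng = np.random.default_rng(42)
n = 600  # respondents per experimental arm (hypothetical)

# The control group sees J = 4 innocuous items; the treatment group
# sees the same 4 plus the sensitive item. Respondents report only
# HOW MANY items apply to them, never which ones.
count_control = rng.binomial(4, 0.5, size=n)
holds_sensitive = rng.random(n) < 0.30  # assumed true prevalence (30%)
count_treatment = rng.binomial(4, 0.5, size=n) + holds_sensitive

# Difference-in-means estimator of the sensitive trait's prevalence:
# E[count | treatment] - E[count | control]
prevalence_hat = count_treatment.mean() - count_control.mean()

# Standard error for a difference of two independent sample means.
se = np.sqrt(count_treatment.var(ddof=1) / n + count_control.var(ddof=1) / n)

print(f"ICT prevalence estimate: {prevalence_hat:.3f} (SE {se:.3f})")
```

In the design described above, this estimate would be computed separately within the CAWI and CATI subsamples and set against the explicit item, so that H1 and H2 amount to tests of the mode gap under each measurement approach.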
Abortions are known to be under-reported in surveys, and previous research has identified a number of ways in which survey methodology may increase or decrease women’s willingness to disclose them. This paper estimates the extent of under-reporting in two nationally representative population surveys by comparing survey-based rates with routine statistics, in order to explore how survey methodology might influence the reporting of abortion. Routine statistics on abortion in Britain are considered to be complete.

We use two British National Surveys of Sexual Attitudes and Lifestyles, conducted in 2000 and 2010 (Natsal-2 and Natsal-3). These cross-sectional surveys were conducted ten years apart on the same population but used different methodologies to collect data on abortion; they therefore enable a limited natural experiment on the effect of changing survey methodology on the reporting of abortions. In Natsal-2, data on abortion were collected using a direct question: women were asked whether they had ever had an abortion and, if so, how many and when the last one occurred. In Natsal-3, data on abortion were collected using a pregnancy history module: women were asked how many times they had ever been pregnant and, for each pregnancy in turn, its outcome and when it ended.

There was no evidence of under-reporting in Natsal-2, which used the direct question: the confidence interval of the abortion rate estimated from the survey included the rate obtained from national statistics. There was evidence of under-reporting in Natsal-3, which used the pregnancy history module: the confidence interval of the survey rate did not include the national rate, and only 71% of abortions were reported. A direct question may therefore be more effective in eliciting reports of abortion than a pregnancy history module.
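As a back-of-envelope illustration of the completeness check described above, a survey-based rate and its confidence interval can be set against the routine-statistics benchmark, with the ratio of the two giving the share of abortions reported. The numbers below are placeholders, not actual Natsal or national figures.

```python
# Illustrative completeness check: compare a survey-based abortion rate
# (with a normal-approximation confidence interval) against the rate
# from routine statistics, treated here as the complete benchmark.
# All numbers are hypothetical placeholders.
import math

events = 120             # abortions reported in the survey (hypothetical)
person_years = 9500.0    # exposure among sampled women (hypothetical)
national_rate = 0.0178   # benchmark rate from routine statistics (hypothetical)

survey_rate = events / person_years
se = math.sqrt(events) / person_years  # Poisson approximation for a rate
lo, hi = survey_rate - 1.96 * se, survey_rate + 1.96 * se

print(f"survey rate {survey_rate:.4f}, 95% CI ({lo:.4f}, {hi:.4f})")
print("benchmark covered by CI:", lo <= national_rate <= hi)
print(f"share of benchmark reported: {survey_rate / national_rate:.0%}")
```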
Providing sound statistical information on the lesbian, gay, and bisexual (LGB) population is needed to inform policy makers about the disadvantage and discrimination suffered by sexual minorities. However, obtaining good-quality data is methodologically challenging, as sexuality is one of the most sensitive topics in surveys.
This paper compares estimates of sexual identity obtained under different protocols. First, it presents the estimated prevalence of the lesbian, gay, and bisexual population obtained with an indirect questioning method, the item-count technique (ICT). Second, it compares a protocol involving face-to-face interviewing with a show card (adopted by the Integrated Household Survey, IHS) with a computer-assisted self-interview protocol (adopted, among others, by the UKHLS) and with the estimates produced using the ICT.
A slight variation of the item-count technique is implemented to derive individual-level estimates. Thus, within individuals, the estimates obtained with the ICT and with direct questions are compared, to determine which sociodemographic groups are more likely to misreport in the direct question (see the sketch after the data description below). The potential of this variation on the standard technique is also discussed, in terms of feasibility and ethical implications.
The analysis is based on experimental data collected in the UKHLS Innovation Panel, a nationally representative sample of the UK population. Allocation to the experimental treatments (ICT, UKHLS protocol, and IHS protocol) is randomized.
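One plausible reading of the within-individual comparison is sketched below: each respondent contributes both an ICT-derived indicator and a direct self-report, and misreporting is tabulated by sociodemographic group. The column names and data are hypothetical, not the Innovation Panel’s actual variables, and the exact ICT variant used to recover individual-level indicators is not specified here.

```python
# Sketch of the within-individual comparison, assuming each respondent
# has (a) an indicator of LGB identity recovered via the ICT variant
# and (b) an answer to the direct question. Data are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "age_group":  ["16-34", "16-34", "35-54", "35-54", "55+", "55+"],
    "ict_lgb":    [1, 0, 1, 1, 1, 0],   # indirect (ICT-derived) indicator
    "direct_lgb": [1, 0, 0, 1, 0, 0],   # direct self-report
})

# A respondent "misreports" when the indirect measure indicates LGB
# identity but the direct question does not.
df["misreport"] = (df["ict_lgb"] == 1) & (df["direct_lgb"] == 0)

# Misreporting rate by sociodemographic group.
print(df.groupby("age_group")["misreport"].mean())
```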
The results may inform survey practitioners and researchers about the best ways to elicit sexual orientation in the UK, and may inform data users about the quality of data elicited with the different protocols.