All time references are in CEST
Sensitive Questions and Social Desirability: Theory and Methods 1
Session Organisers | Dr Felix Wolter (University of Konstanz); Professor Jochen Mayerl (Chemnitz University of Technology); Dr Henrik K. Andersen (Chemnitz University of Technology); Dr Justus Junkermann (University of Leipzig) |
Time | Wednesday 19 July, 11:00 - 12:30 |
Room | U6-02 |
Misreporting to sensitive survey questions is an age-old and notorious problem in survey methodology. Empirical evidence shows that survey respondents tend to engage in self-protective behavior (e.g., by not answering truthfully) when it comes to questions on private issues, deviant behavior, or attitudes that do not conform with social norms (e.g., sex, health, income, illicit drug use, tax evasion, or xenophobia). This leads to biased estimates and poor data quality. Although a large body of methodological literature addresses these issues, many questions remain. Moreover, recent literature has called special questioning techniques such as the crosswise model and other randomized response techniques into question, for instance because they have been accused of generating false-positive bias.
This session aims to deepen our theoretical and practical knowledge of why and when response or social desirability bias occurs and how best to avoid it. In particular, we are interested in studies that (1) focus on explaining the psychological mechanisms that lead to misreporting on sensitive survey questions; (2) present conceptual or empirical research on special questioning techniques (e.g., randomized response and item count techniques, factorial surveys) aimed at mitigating response bias; (3) deal with statistical procedures to analyze data generated with special data collection methods; (4) address other topics related to social desirability and/or response bias, e.g., mixed methods of data collection, the use of “big data” and/or record linkage techniques, as well as research ethics and data protection issues.
Keywords: Sensitive questions, social desirability, response bias, data collection techniques
Ms Franziska Quoß (ETH Zurich) - Presenting Author
Respondents may knowingly or subconsciously overstate their answers to align with a perceived stance of a survey, leading to social desirability bias. This problem may be exacerbated in a panel context, with the danger of panel conditioning over time. We investigate the role of social desirability bias in a panel context based on the Swiss Environmental Panel (SEP), a Swiss panel study running since 2018 with two survey waves per year. Compared to other panel studies, the SEP is an interesting case as it focuses on one policy dimension, namely environmental issues. This gives researchers and policy-makers uniquely dense insights into this policy arena, but may increase the risk of social desirability bias in environmental items. We field two Item Count Technique experiments with sensitive items in different areas (travel and transport behaviour) both in the SEP and in a second Swiss panel study with a focus on mobility topics (Swiss Mobility Panel, SMP). This allows us to compare the level of social desirability bias between the two panels, but also between groups of respondents (long-term respondents versus panel refreshment). We also use a rich set of socio-demographic covariates drawn from an administrative dataset to examine which groups of the Swiss resident population show higher or lower social desirability in the two panel studies.
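In its basic form, the Item Count Technique estimate behind such experiments is a simple difference in means between a treatment group (shown J non-sensitive items plus the sensitive item) and a control group (shown only the J non-sensitive items). A minimal sketch on simulated data; all parameter values are illustrative assumptions, not figures from the SEP or SMP:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated list experiment: control sees J=4 non-sensitive items,
# treatment sees the same items plus one sensitive item.
true_prevalence = 0.20                         # illustrative value
n = 2000
baseline = rng.binomial(4, 0.5, size=2 * n)    # "yes" count on non-sensitive items
sensitive = rng.binomial(1, true_prevalence, size=n)

control = baseline[:n]
treatment = baseline[n:] + sensitive

# Difference-in-means estimator of the sensitive item's prevalence
est = treatment.mean() - control.mean()
se = np.sqrt(treatment.var(ddof=1) / n + control.var(ddof=1) / n)
print(f"estimated prevalence: {est:.3f} (SE {se:.3f})")
```

Because respondents only report a count, no individual answer to the sensitive item is ever revealed, which is what the technique trades against the loss in statistical efficiency visible in the standard error.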
Dr Georg Kanitsar (Vienna University of Economics and Business) - Presenting Author
Dr Katharina Pfaff (University of Vienna)
Even though homosexual lifestyles are increasingly accepted in Western societies, reports of homophobic hostility and anti-gay sentiment remain commonplace in football. This seems surprising considering that past work consistently points to improved attitudes and growing support for diversity among players, officials, and spectators. Here, we explore two potential explanations for why attitudes towards gay athletes improve while many people still report a homophobic climate pervading amateur and professional football: social desirability and pluralistic ignorance. To do so, we conduct an online survey among a diverse, football-interested sample in the UK (n=1,215). The survey finds that the extent of homophobia is small but varies between generic attitudes and concrete practices, as well as by sociodemographic characteristics. Importantly, estimates from a list experiment do not differ significantly from the prevalence measured by direct questions, providing no evidence of social desirability concerns. By contrast, second-order beliefs about homophobia substantially and consistently exceed attitudes, pointing towards pluralistic ignorance as the most likely explanation. We conclude by emphasizing the need for transparent communication and informational campaigns about the true extent of homophobic attitudes to correct misperceptions among supporters and players.
Dr Joanna Syrda (University of Bath) - Presenting Author
Despite a large literature investigating spouses’ relative and absolute earnings and the division of housework, findings remain mixed and consensus elusive. One potential reason may lie in the theoretical and empirical approach, namely the failure to address agents’, and specifically respondents’, heterogeneity. I take a new methodological approach to housework and gender by examining differences between men and women in how they answer survey questions about both their own and their spouse’s housework, depending on the wife’s relative income. Fixed-effects regression results using the 1999-2015 waves of the US Panel Study of Income Dynamics (n=6,017) show a significant asymmetric effect of gender norms on spouses’ answers. When wives earn relatively more, thereby deviating from the gender norm, husbands report lower housework hours for themselves and higher housework hours on their wives’ behalf. This is consistent with gender deviance neutralization theory. However, when wives report both spouses’ housework hours, the relationship follows the bargaining-exchange model.
Much housework research has relied, and still relies, on self-reported housework data. Moreover, it is frequently a single individual who completes the survey instrument on behalf of the household. This work addresses both the conceptual and the methodological challenge. It also takes advantage of the empirical opportunities that come with such a survey design by investigating what can be learned from wives’ and husbands’ survey answers about the division of housework, juxtaposed against spousal relative income. This research builds on and connects two sets of theoretical, methodological, and empirical literature on gender and housework: (1) spousal differences in reported housework hours, and (2) the relationship between housework and spousal relative income.
Mr Andreas Quatember (Johannes Kepler University JKU Linz) - Presenting Author
When questions on sensitive subjects are asked directly in empirical research, data quality may suffer from self-protective behavior by the survey units. After data collection, statistical methods such as weighting adjustment can only attempt to “repair” the resulting nonresponse, not the false answers. Indirect questioning (IQ) designs such as randomized response (RR) models, nonrandomized response (NRR) models, and the item count technique aim to secure respondents’ cooperation through higher privacy protection.
However, to be seen as serious competitors to problematic traditional direct questioning, these unusual alternatives have to be easy for respondents to understand and simple for experimenters to implement in all survey modes. Additionally, their theory has to be provided for general probability sampling, including stratification and clustering, because complex sampling schemes are often used in the fields in which sensitive questions are asked.
For NRR models, which are transformations of the original RR techniques in which a direct answer to the sensitive item never has to be given during the questioning process, Hoffmann et al. (2017) showed higher comprehensibility than the other IQ techniques in their study and substantially increased perceived privacy protection compared with the RR methods. However, results of studies such as Wolter and Diekmann (2021) show how important it is that respondents adhere exactly to the procedural rules in order to avoid false process answers.
In the talk, two new NRR conceptions are presented along with suggestions for their practical implementation in questionnaires. These techniques are NRR versions of two well-known, accepted RR techniques, namely the “forced RR model” and the “contamination method.” The different NRR approaches are discussed with respect to the privacy protection offered, the possibility of “self-protecting answers,” and the estimation accuracy under general probability sampling.
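For reference, the estimator behind the classic forced RR model that the NRR variants build on can be sketched as follows. The design probabilities and prevalence below are illustrative assumptions, not values from the talk:

```python
import numpy as np

rng = np.random.default_rng(1)

# Forced-response RR design (illustrative parameters):
# with prob p answer truthfully, with prob q give a forced "yes",
# with prob 1 - p - q give a forced "no".
p, q = 0.7, 0.15
true_prevalence = 0.10
n = 5000

truth = rng.binomial(1, true_prevalence, size=n)
u = rng.random(n)
answer = np.where(u < p, truth,             # truthful answer
         np.where(u < p + q, 1, 0))         # forced yes / forced no

# Moment estimator: P(yes) = p*pi + q  =>  pi_hat = (P(yes)_hat - q) / p
lam = answer.mean()
pi_hat = (lam - q) / p
print(f"estimated prevalence: {pi_hat:.3f}")
```

The randomization protects the individual respondent (any single “yes” may be forced), while the known design probabilities p and q let the researcher unmask the prevalence at the aggregate level.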
Dr Sandra Walzenbach (University of Konstanz) - Presenting Author
Professor Thomas Hinz (University of Konstanz)
As a method that was designed to ensure anonymity without requiring a random device, the crosswise model was hoped to overcome some of the key flaws associated with Randomized Response Techniques when assessing sensitive topics in surveys. Many studies drew, and we believe too many keep drawing, positive conclusions regarding the method.
The trouble is that these conclusions are based on the finding that a crosswise model on average yields higher estimates than a direct question. We argue that this is a poor indicator of data quality: the CM estimator is systematically biased towards 50% whenever a socially undesirable behaviour with low prevalence is assessed and respondents disobey the instructions. This fact leads most studies, including the existing meta-analyses, to unjustifiably positive conclusions.
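This bias mechanism can be illustrated with a small simulation: when a share of respondents ignores the instructions and ticks an answer at random, the standard CM estimator is pulled toward 50%, inflating estimates of a low-prevalence behaviour. All parameter values below are illustrative assumptions, not results from the study:

```python
import numpy as np

rng = np.random.default_rng(2)

# Crosswise model: respondents report whether their answers to a sensitive
# item (unknown prevalence pi) and a non-sensitive item with known
# prevalence pq agree ("both yes or both no") or disagree.
pi, pq = 0.05, 0.20          # low-prevalence sensitive behaviour (illustrative)
n = 10_000

def cm_estimate(share_random):
    """CM estimator when a share of respondents ticks a box at random."""
    sens = rng.binomial(1, pi, size=n)
    nons = rng.binomial(1, pq, size=n)
    agree = (sens == nons).astype(int)
    # instruction-disobeying respondents pick "agree" with probability 0.5
    rand = rng.random(n) < share_random
    agree[rand] = rng.binomial(1, 0.5, size=rand.sum())
    # P(agree) = pi*pq + (1 - pi)*(1 - pq)  =>  invert for pi
    lam = agree.mean()
    return (lam + pq - 1) / (2 * pq - 1)

print(cm_estimate(0.0))      # close to the true prevalence of 0.05
print(cm_estimate(0.2))      # biased upward, toward 50%
```

Because the inflated estimate exceeds the direct-question benchmark, it is easily misread as a successful reduction of social desirability bias, which is exactly the comparison the authors criticize.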
We present a validation study on the crosswise model, consisting of five survey experiments implemented in a general population survey. Our first crucial result is that in none of these experiments was the crosswise model able to verifiably reduce social desirability bias. In contrast to most previous CM applications, we use an experimental design that allows us to distinguish a reduction in social desirability bias from heuristic response behaviour, such as random ticking, which leads to false positive or false negative answers.
In addition, we provide insights into two potential explanatory mechanisms that have so far received little attention in empirical studies: primacy effects and panel conditioning. We find no consistent primacy effects, nor does response quality improve through learning when respondents have encountered crosswise models in past survey waves. We interpret our results as evidence that the crosswise model does not work in general population surveys and argue that the question format causes mistrust among participants.