Sensitive Questions in Surveys – What Are Sensitive Topics?
Session Organisers |
Dr Kathrin Gärtner (University of Applied Sciences Wiener Neustadt)
Professor Wolfgang Aschauer (University of Salzburg)
Professor Martin Weichbold (University of Salzburg)
Time | Tuesday 16th July, 11:00 - 12:30 |
Room | D20 |
Gaining valid information about sensitive topics has been a challenge for survey research since its beginnings. Over the years, different methods have been developed and tested to encourage participants to respond in such surveys (avoiding unit nonresponse), to answer sensitive questions (avoiding item nonresponse), and to give honest answers (avoiding bias caused by false or socially desirable answers). These methods include sampling strategies, framing and formulating items so that potentially problematic answers seem "normal", enhancing perceived privacy, and randomized response techniques that enable at least aggregate-level estimates. However, as all of these methods come with potential drawbacks, it seems reasonable to ascertain as precisely as possible in which cases they are really needed, i.e. which topics and items have to be considered sensitive. Besides classical examples such as sexual behavior, drug use, criminal and ethical problems, and attitudes towards suicide and abortion, we also know from cross-national surveys that widely used indicators such as income are highly sensitive and that missing rates vary across countries. We also want to point out that people of different ethnic backgrounds, age groups, or attitudinal characteristics may differ in which topics they consider sensitive. For this session, we particularly encourage submissions on findings about which topics are sensitive in which populations. We welcome studies using classical methods to detect sensitive questions and their impacts, such as those described above, but we particularly aim for new concepts and methods for identifying the sensitivity of questions among different societal or cultural groups.
Keywords: sensitive topics, nonresponse, survey construction, sensitive questions
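As an illustration of the aggregate-only guarantee of the randomized response techniques mentioned in the session description, the following Python sketch implements Warner's (1965) classic estimator. The function name and the example figures are hypothetical; only the prevalence of the sensitive trait in the sample is recoverable, never any individual answer.

```python
# Minimal sketch of Warner's (1965) randomized response estimator.
# Each respondent answers the sensitive statement with probability p
# and its negation with probability 1 - p, so individual answers stay
# private while the aggregate prevalence remains estimable.
import math

def warner_estimate(yes_count: int, n: int, p: float) -> tuple[float, float]:
    """Return (estimated prevalence, standard error) under Warner's model."""
    if p == 0.5:
        raise ValueError("p must differ from 0.5 for the estimator to be defined")
    lam = yes_count / n                      # observed 'yes' proportion
    pi_hat = (lam - (1 - p)) / (2 * p - 1)   # unbiased prevalence estimate
    se = math.sqrt(lam * (1 - lam) / n) / abs(2 * p - 1)
    return pi_hat, se

# Hypothetical example: 420 'yes' answers out of 1,000 respondents, p = 0.7
pi_hat, se = warner_estimate(420, 1000, 0.7)
print(f"estimated prevalence: {pi_hat:.3f} (SE {se:.3f})")
```

The price of this privacy protection is visible in the standard error: the divisor |2p - 1| inflates the variance relative to a direct question, one of the drawbacks the session description alludes to.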
Dr Bettina Müller (Department of Sociology, LMU Munich) - Presenting Author
Dr Claudia Schmiedeberg (Department of Sociology, LMU Munich)
Which topics are considered "sensitive" may not only vary between groups, such as age groups or cultural/ethnic subpopulations, but also over time within a given sample population. In particular, respondents' acceptance of sensitive questions may differ in the first wave of a survey compared to repeated participation in a panel.
Sensitive questions are prone to item nonresponse as respondents may find them too intrusive, regardless of the “true” answer, or have data confidentiality concerns. In a panel survey, respondents’ worries about the confidentiality of their data are likely to decrease as trust in the survey and the interviewer grows, and as they learn that information disclosure does not have any negative consequences. Furthermore, negative reactions that stem from feelings of intrusiveness may be less pronounced as respondents are already familiar with the “sensitivity level” of the survey content, which could possibly mitigate item nonresponse in later waves.
Using data from the German Family Panel pairfam, an individual-level sample of the birth cohorts 1971-1973, 1981-1983, and 1991-1993, we evaluate the effect of panel experience on refusals ("I don't want to answer that") to sensitive questions on topics such as sexual satisfaction, infidelity, and infertility. These questions are asked in the CASI section of the survey, so interviewer effects are assumed to be minimized. We apply fixed-effects regression to estimate intra-individual changes in response behavior, accounting for all time-constant measured and unmeasured confounding variables. Additionally, we investigate whether patterns differ by birth cohort.
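A minimal sketch of the within-person (fixed-effects) estimator described above, assuming a long-format data frame; the column names (person_id, wave, refused) are illustrative, not the actual pairfam variables:

```python
import pandas as pd
import statsmodels.api as sm

def fe_refusal_model(df: pd.DataFrame):
    """Linear probability model of item refusal with person fixed effects."""
    cols = ["refused", "wave"]
    # Demean within persons: this sweeps out all time-constant respondent
    # characteristics, observed or unobserved, which is the fixed-effects
    # logic the abstract relies on.
    demeaned = df[cols] - df.groupby("person_id")[cols].transform("mean")
    X = sm.add_constant(demeaned[["wave"]])
    return sm.OLS(demeaned["refused"], X).fit(
        cov_type="cluster", cov_kwds={"groups": df["person_id"]}
    )
```

After demeaning, the wave coefficient reflects intra-individual change in refusal propensity as panel experience accumulates; standard errors are clustered by person to account for repeated observations.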
Our study aims to gain insight into the stability of respondents’ reactions to sensitive survey content over time. Evaluating the impact of survey experience on item nonresponse for questions that have been implemented since the first wave vs. questions integrated from wave two onwards provides practical guidance for questionnaire design.
Professor Aram Simonyan (Yerevan University, Kiel University)
Professor Peter Graeff (Kiel University) - Presenting Author
In our study, we analyse survey data about giving and taking bribes in a country classified by the Corruption Perceptions Index as rather corrupt. For this, we draw on a 2010 Armenian population survey about corruption. Corruption is usually considered a sensitive topic, implying that respondents either give socially desirable answers or simply refuse to participate. Our suggestion is that in highly corrupt countries, questions about taking and giving bribes are less sensitive due to the presence of corrupt practices in everyday life. We also expect that in western countries the items of the Armenian population survey would be considered much more sensitive, triggering more adverse reactions from respondents.
In order to explore the gap between western and non-western standards of sensitivity, we conducted a qualitative study of respondents' reactions with a sample of 15 Armenian and 15 German university faculty members. The results inform the interpretation of our quantitative analysis of the Armenian survey data. In particular, we reviewed the factors that shaped the attitudes expressed in the survey. One major factor underlying corrupt practices is the trust placed in (kin) people when corrupt deals are made.
Mr Henrik Andersen (Chemnitz University of Technology) - Presenting Author
Professor Jochen Mayerl (Chemnitz University of Technology)
Social desirability (SD) describes the tendency of respondents to present themselves in a more positive light than is accurate and is a serious concern in surveys. If researchers are better able to understand the underlying mechanisms responsible for SD bias, they may be able to devise ways to identify and correct for it. One possibility involves determining whether it is more of a deliberate ‘editing’ of responses or an automatic, perhaps ‘self-deceptive’, act. Then researchers could potentially flag conspicuously fast or slow responses to improve data quality.
We outline dual-process-related theoretical arguments for both scenarios and test their plausibility. Specifically, we discuss two competing – but perhaps ultimately complementary – points of view. Esser (1990) describes SD as an automatic response to highly transparent and accessible normative expectations. Stocké (2004), on the other hand, sees SD as a deliberate, utility-maximizing editing process motivated by the desire to receive social approval.
Our analysis uses fixed-effects multilevel models that enable us to control for unobserved differences in respondent and item characteristics while also examining cross-level interactions between predictors at the various levels. Specifically, we examine the classic respondent-related (i.e. need for social approval) and item-related (i.e. trait desirability) characteristics associated with SD bias, as well as the speed at which respondents gave their answers. Doing so allows us to observe under what circumstances respondents tended to overstate positive characteristics and understate negative ones. We find evidence for SD as both an automatic and a deliberate response behaviour.
Interestingly, the mechanism responsible for determining whether SD occurs automatically or deliberately seems to be whether the item content is desirable or undesirable. Desirable traits seem to elicit faster SD responses whereas undesirable traits seem to elicit slower SD responses.
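A hedged sketch of the kind of multilevel specification described above, with a respondent random intercept and a cross-level interaction between trait desirability and response latency. The column names and the single random intercept are simplifying assumptions for illustration, not the authors' exact model:

```python
import statsmodels.formula.api as smf

def sd_bias_model(df):
    """Random-intercept model with a cross-level interaction between
    item desirability (item level) and response latency (answer level)."""
    model = smf.mixedlm(
        "overstatement ~ trait_desirability * log_latency + need_for_approval",
        data=df,
        groups=df["respondent_id"],  # respondent-level random intercept
    )
    return model.fit()
```

In a specification of this kind, the interaction term captures the pattern reported above: whether fast (automatic) or slow (deliberate) answers are associated with SD depends on whether the trait asked about is desirable or undesirable.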
Dr Yfke Ongena (University of Groningen) - Presenting Author
Miss Emma Zaal (University of Groningen)
Dr Marieke Haan (University of Groningen)
We present results from several Fake Good Fake Bad experiments. In such experiments, respondents are asked to intentionally give either socially desirable answers (fake good) or socially undesirable answers (fake bad). Significant differences between these two groups are taken as an indication that the questions are indeed sensitive and thus vulnerable to social desirability effects. Our first experiment, a web survey (n=215) on alcohol consumption, showed that the variance of scores in the fake bad condition was systematically higher than in the fake good condition. Several possible explanations exist for this phenomenon: respondents may be less certain about what is socially undesirable than about what is socially desirable, faking good may be easier than faking bad, or respondents may interpret the faking instructions differently. In order to increase comprehension of the faking task, in our second and third experiments we tested the effects of visual and oral instructions. The oral instruction that preceded administration did yield lower variance in the fake bad group, but not for all questions. Another cause of high variance in fake scores could be the different social settings that respondents take into account when determining the socially desirable or undesirable position. Therefore, in the fourth experiment, a web survey, we tested the effects of steering respondents towards a social context by means of pictures. However, results showed no difference in respondents' answers between the version with pictures that included a social context (i.e., pictures of people consuming alcohol or drugs) and a version without social context (i.e., pictures of alcoholic beverages or drugs). Finally, in a fifth experiment, we tested the possibility of allowing respondents to select multiple response options. The results showed that only a fraction of respondents selected more than one option, which may have been due to satisficing effects.
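A minimal sketch of the variance comparison underlying these experiments, using Levene's test as one standard choice for comparing variances between two groups; the data and function names are illustrative, and the authors' exact test is not stated in the abstract:

```python
import numpy as np
from scipy.stats import levene

def compare_variances(fake_good: np.ndarray, fake_bad: np.ndarray) -> None:
    """Compare the spread of answers between the two faking conditions."""
    stat, p = levene(fake_good, fake_bad)  # robust test of equal variances
    print(f"var(fake good) = {fake_good.var(ddof=1):.2f}")
    print(f"var(fake bad)  = {fake_bad.var(ddof=1):.2f}")
    print(f"Levene W = {stat:.2f}, p = {p:.3f}")
```

Systematically higher variance in the fake bad condition, as found in the first experiment, would show up here as a significant Levene statistic with the larger sample variance on the fake bad side.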
Dr Karin Wegenstein (University of Applied Sciences, Wr. Neustadt)
Mr Sebastian Kunc (University of Applied Sciences, Wr. Neustadt)
Dr Kathrin Gärtner (University of Applied Sciences, Wr. Neustadt) - Presenting Author
Two current developments encourage higher education institutions to reconsider their practices of student data usage. First, the EU General Data Protection Regulation creates a framework protecting individuals from practices that may infringe privacy rights. Second, the digitalization of learning processes offers opportunities for implementing learning analytics systems. The study at hand investigates the perceived sensitivity of performance-based student data. We are thereby confronted with the problem of how to collect reliable data on the self-evaluation of performance.
We expect respondents to be hesitant to provide information about average grades. Hence, we test two different scales regarding their capability of providing a differentiated and accurate insight into actual performance levels. First, respondents are asked to read statements that describe a certain attitude towards performance. They are invited to rank these statements according to whether or not they identify with the described attitude. Second, respondents are confronted with a set of contrasting statements about their performance and asked to locate their self-estimated performance on this semantic differential scale.
The scale that provides the most reliable results is implemented in a survey assessing the perceived sensitivity of performance data, probing the following hypotheses: (1) perceived sensitivity differs by data type and personality; (2) part of the variance in the perceived sensitivity of performance data can be explained by predictors such as gender, level of studies, field of study, and individual performance level. These hypotheses are tested using a set of multivariate statistical methods, including two-way ANOVA and multiple linear regression analysis.
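As a sketch of how hypothesis (1) could be tested with the two-way ANOVA named above; the column names (perceived_sensitivity, data_type, personality) are assumptions for the example, not the study's actual variables:

```python
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

def sensitivity_anova(df):
    """Two-way ANOVA: perceived sensitivity by data type and personality,
    including their interaction."""
    fit = smf.ols(
        "perceived_sensitivity ~ C(data_type) * C(personality)", data=df
    ).fit()
    return anova_lm(fit, typ=2)  # Type II sums of squares
```

Hypothesis (2) would then correspond to a multiple linear regression adding predictors such as gender, level of studies, field of study, and performance level to the model formula.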
The results of the study are key to data management approaches and learning analytics. The results of the scale tests for collecting data on the self-evaluation of performance provide guidance for developing scales that address content respondents may be hesitant to reveal.