Asking sensitive questions

Session Organiser: Dr Mirjam Fischer (University of Cologne)
Time: Friday 16 July, 13:15-14:45
Asking sensitive questions in survey settings comes with unique challenges, which can be hard to anticipate. This session includes research on the best practices, measurement, and consequences of asking sensitive questions in surveys.
Keywords: sensitive questions
Dr Shelley Feuer (U.S. Census Bureau) - Presenting Author
Dr Stefanie Fail (Nuance Communications)
Dr Michael Schober (New School for Social Research)
It is well known that survey questions vary in their sensitivity. But the same questions may be differently sensitive for different respondents, depending on the norms they subscribe to as well as their actual behaviors (e.g., questions about alcohol may not be sensitive for a nondrinker). This paper reports on a method for empirically assessing the extent to which specific questions and response options are considered sensitive (embarrassing or socially undesirable) at a particular moment or among a particular population. To test the method, online ratings of the sensitivity of survey questions and potential responses from large-scale US government and social scientific surveys were collected from two samples of 100 US respondents. Participants rated how embarrassed (‘not at all embarrassed’, ‘somewhat embarrassed’, or ‘very embarrassed’) they thought most people would be to answer each question and give each response option during an interview. Across both samples, every question was judged potentially sensitive by at least some respondents, and no question was judged sensitive by all; nevertheless, there was substantial agreement in the percentage of respondents who judged particular questions and responses to be somewhat or very embarrassing for most people to be asked or to answer. Results demonstrate clearly that sensitive questions can have nonsensitive responses, and that nonsensitive questions can have sensitive responses (e.g., going to the movies too often). Depending on the threshold one picks for judging a question or response as sensitive (e.g., 40%, 50%, or 60% of respondents think most people would find it embarrassing to be asked or to answer), different questions and responses emerge as sensitive. Because the profile of what is sensitive changes with the threshold, empirically assessing question and response sensitivity may be more useful than relying on researcher judgment alone.
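As a rough illustration of the threshold-based classification described above (not the authors' actual instrument), the following sketch flags questions as sensitive at different thresholds. The question texts, rating counts, and the sensitive_share helper are all invented for this example:

```python
# Hypothetical sketch: flag a question as sensitive when the share of
# raters choosing "somewhat" or "very" embarrassed meets a threshold.
# Questions and ratings below are invented for illustration.
from collections import Counter

ratings = {
    "How many alcoholic drinks do you have per week?":
        ["very"] * 55 + ["somewhat"] * 10 + ["not at all"] * 35,
    "How often do you go to the movies?":
        ["very"] * 5 + ["somewhat"] * 20 + ["not at all"] * 75,
}

def sensitive_share(judgments):
    """Share of raters judging the item somewhat or very embarrassing."""
    counts = Counter(judgments)
    return (counts["somewhat"] + counts["very"]) / len(judgments)

# Different thresholds yield different sets of "sensitive" questions.
for threshold in (0.4, 0.5, 0.6):
    flagged = [q for q, j in ratings.items() if sensitive_share(j) >= threshold]
    print(f"threshold {threshold:.0%}: {flagged}")
```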
Professor Paula Fomby (Institute for Social Research, University of Michigan) - Presenting Author
Professor Katherine McGonagle (Institute for Social Research, University of Michigan)
Professor Narayan Sastry (Institute for Social Research, University of Michigan)
We summarize survey data collection on adolescents’ and young adults’ self-reported sexual orientation and gender identity in the 2019 Child Development Supplement (ages 12-17 years) and 2019 Transition into Adulthood Supplement (ages 18-28 years) to the US Panel Study of Income Dynamics (PSID). To our knowledge, these are the only data on sexual orientation and gender identity recently collected from US adolescents and young adults in the context of a nationally representative, longitudinal study. These survey data provide a resource for studying the contemporary relationship between young people’s sexual orientation and gender identity and outcomes such as health, social support, status attainment, and family formation. We will describe the collection of information on sexual orientation and gender identity in the context of a mixed-mode panel study, including questionnaire development, the evaluation of mode choice, the effects of mode on data quality, and the concordance of estimates with external data sources.
Data
PSID is a household panel study with a genealogical, intergenerational design. The study began in 1968 to investigate the determinants of entry into and exit from poverty in a sample of 4,802 families. It continues to the present day as a biennial interview administered to respondents who are descended by birth or adoption from the original householders or who belong to families added to PSID through periodic immigrant refresher samples. The most recent wave of data collection was completed in 2019 with almost 10,000 family households.
PSID fields two youth-focused supplemental studies that included questions on sexual orientation and gender identity for the first time in 2019:
• The Child Development Supplement (CDS) focuses on the well-being and experiences of minor children (age 0-17 years). In 2019, adolescents age 12-17 years completed a computer-assisted telephone interview with a trained interviewer and were then transferred to a computerized survey instrument using interactive voice response (IVR) technology to administer questions on sensitive topics, including sexual orientation and gender identity (N=1283).
• The Transition into Adulthood Supplement (TAS) is a survey interview completed biennially with young adults age 18-28 years in PSID families. In 2019, it was conducted as a mixed-mode interview, with 80 percent of respondents initially pushed to a self-administered web questionnaire (N=2404) and the remainder assigned to computer-assisted telephone interviewing (N=560). Questions about sexual orientation and gender identity were included in both modes.
Presentation topics:
• Association of sexual orientation and gender identity with survey outcomes such as fieldwork effort (number of interviewer attempts, days in the field) and mode differences in response choice, nonresponse, and survey breakoff
• Wording and placement of questions on sexual orientation and gender identity
• Self-reported prevalence of sexual orientation and gender identity minority status by age, sex assigned at birth, and other demographic correlates across CDS and TAS
• Comparability of prevalence estimates to national sources including the National Survey of Family Growth (ACASI, 15-28 years), the National Health Interview Survey (CATI, 18-28 years), and Add Health (12-28 years); a minimal sketch of one such comparability check follows this list.
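As a hedged sketch of what one comparability check might look like (not the authors' actual procedure), the following computes a design-weighted prevalence with a rough confidence interval based on a Kish effective sample size, then checks whether an external benchmark falls inside it. The weights, responses, and benchmark value are all simulated assumptions:

```python
# Hypothetical sketch: weighted prevalence vs. an external benchmark.
import numpy as np

rng = np.random.default_rng(0)
w = rng.uniform(0.5, 2.0, size=1000)   # survey weights (simulated)
y = rng.binomial(1, 0.08, size=1000)   # 1 = reports minority identity (simulated)

p_hat = np.sum(w * y) / np.sum(w)      # design-weighted prevalence
# Kish effective sample size to account (roughly) for unequal weights.
n_eff = np.sum(w) ** 2 / np.sum(w ** 2)
se = np.sqrt(p_hat * (1 - p_hat) / n_eff)
lo, hi = p_hat - 1.96 * se, p_hat + 1.96 * se

benchmark = 0.085                      # e.g., an external national estimate (invented)
print(f"weighted prevalence: {p_hat:.3f} (95% CI {lo:.3f}-{hi:.3f})")
print("benchmark inside CI" if lo <= benchmark <= hi else "benchmark outside CI")
```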
Dr Zeina Mneimneh (University of Michigan)
Ms Julie de Jong (University of Michigan) - Presenting Author
Dr Yasmin Altwaijri (King Faisal Specialist Hospital and Research Centre)
The presence of a third party in face-to-face interviews constitutes an important contextual factor that affects the privacy setting of the interview and can potentially alter the interviewee's responses to culturally sensitive questions (Aquilino, 1997; Casterline and Chidambaram, 1984; Mneimneh et al., 2015, 2020; Pollner and Adams, 1994). While many surveys require their interviews to be conducted in a private setting, a significant proportion of interviews (typically in the range of 30-40 percent) are reported to be conducted in the presence of a third party. Information about third-party presence is usually collected through observations made and recorded by the interviewer. Our recent work has shown that interviewers vary significantly in the rate of their private interviews (i.e., the absence of a third party during the interviews): while some interviewers report high rates of privacy among the interviews they administer, others report low rates (Mneimneh et al., 2018). Yet the extent to which such interviewer variation is driven by “true” differences in the rate of private interviews, rather than by measurement error in the quality of the observations, remains an open question. Data on third-party presence are most commonly recorded at the end of the interview. However, for long interviews, recording after the fact whether someone was present during any section of the interview may suffer from recall problems. Is it possible, then, that section-specific privacy measures, where the interviewer records such observations immediately after each questionnaire section, have better data quality and show less interviewer variation than end-of-interview measures? And how would the two types of observations differ in their effect on the reporting of sensitive outcomes?
This paper explores these research questions for the first time using data from a national mental health survey conducted in the Kingdom of Saudi Arabia. A total of 4,000 face-to-face interviews were completed using computer-assisted personal interviewing (CAPI). Interviewers were required to record their observations about the presence of a third person at the end of several questionnaire sections throughout the interview, in addition to recording the overall presence of a third person at the end of the interview. We use these two types of observations to measure the contribution of interviewer variation to privacy estimates and to compare their predictors. We then use two variables measuring sensitive topics – one attitudinal, the other behavioral – as case studies to examine the differential impact of incorporating the two types of observational data in analyses. Given the difficulty of achieving interview privacy across all interviews, understanding the quality of privacy measures is essential if they are to be included in substantive models. This paper sheds light on the quality of such measures and provides recommendations for how to improve their measurement properties.
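As a hedged illustration of one way to quantify interviewer variation of this kind (not the authors' actual analysis), the sketch below simulates binary privacy observations clustered by interviewer and computes a one-way ANOVA intraclass correlation; all variable names and data are invented:

```python
# Hypothetical sketch: how much of the variation in "interview was
# private" is attributable to interviewers? Estimated here with a
# one-way ANOVA (method-of-moments) intraclass correlation on
# simulated, interviewer-clustered binary observations.
import numpy as np

rng = np.random.default_rng(42)

n_interviewers, n_per_interviewer = 50, 80
# Simulate interviewer-specific privacy propensities (true variation).
propensity = rng.beta(8, 4, size=n_interviewers)           # mean ~0.67
private = rng.binomial(1, np.repeat(propensity, n_per_interviewer))
interviewer_id = np.repeat(np.arange(n_interviewers), n_per_interviewer)

# One-way ANOVA estimator of the intraclass correlation (ICC).
grand_mean = private.mean()
group_means = np.array([private[interviewer_id == j].mean()
                        for j in range(n_interviewers)])
msb = n_per_interviewer * np.sum((group_means - grand_mean) ** 2) / (n_interviewers - 1)
msw = np.sum((private - group_means[interviewer_id]) ** 2) / (len(private) - n_interviewers)
icc = (msb - msw) / (msb + (n_per_interviewer - 1) * msw)

print(f"estimated interviewer ICC for privacy: {icc:.3f}")
```

Comparing such an ICC computed from end-of-interview observations against one computed from section-specific observations is one simple way to see which measure carries more interviewer-level variance.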