Methodological challenges in health surveys

Session Organisers: Dr Marieke Haan (University of Groningen), Dr Yfke Ongena (University of Groningen)
Time: Friday 2 July, 13:15 - 14:45
Health surveys generally cover health behaviors and attitudes, risk factors, and related variables such as socioeconomic status. When measuring these concepts, researchers can face several methodological challenges. In this session we discuss such challenges, including: social desirability bias, as outlined in a systematic review by Zaal et al.; choice of scale and label points, addressed in an experimental study presented by Haan et al.; and answering questions on chronic conditions of self and family members, investigated in an interaction analysis study presented by Dykema et al. Another challenge is dealing with specific groups, such as older adults with a low socioeconomic status; Platzer et al. will present the benefits of using qualitative photo-elicitation studies for this group. Finally, Ongena and Haan will discuss practical guidelines on recruiting participants for waiting room surveys, which are commonly used in health care research.
Keywords: health survey methodology, questionnaire design, mixed methods
Ms Emma Zaal (University of Groningen) - Presenting Author
Professor John Hoeks (University of Groningen)
Dr Yfke Ongena (University of Groningen)
Socially Desirable Responding (SDR) - presenting a better version of yourself by providing biased self-reports - is inevitable in interactions about health behavior. SDR can create serious problems for health communication research that uses interviews and questionnaires. The phenomenon can significantly affect study outcomes by shifting respondents’ answers to self-report items toward perceived social norms and by increasing non-response (leaving questions unanswered). For instance, people tend to overreport healthy behavior such as exercising and to underreport unhealthy behavior like substance use. It is generally recognized that SDR is a serious concern that needs to be dealt with urgently. However, to date, we are still in the early stages of unraveling the mechanisms underlying SDR and the complex interplay of the individual, topic, question-wording, and context characteristics that play a role in it. In addition, we do not yet know the optimal ways to measure, predict, and reduce SDR in health communication research.
Before attempting to control for and reduce SDR in health communication, it is necessary to establish what is known about the factors leading to misreporting and non-response due to SDR. Therefore, a systematic review was carried out that collected and investigated recent research literature on SDR. The review provides a shortlist of the factors currently known to best predict SDR in health communication. This set of factors includes individual characteristics (e.g., personality, (sub-)cultural orientation, gender, age), aspects of the specific topic under discussion (e.g., healthy versus unhealthy behavior), question-wording manipulations (e.g., formulating survey questions so that respondents understand the items, make enough effort to answer accurately, and are steered away from SDR), and context features (e.g., mode of administration).
In sum, the review provides 1) a general overview of factors related to measuring, predicting, and reducing SDR; 2) the theoretical and practical knowledge gaps that still exist in studying SDR; and 3) recommendations on the types of studies needed to help capture SDR and reduce its impact.
Dr Marieke Haan (University of Groningen) - Presenting Author
Dr Yfke Ongena (University of Groningen)
Mr Jeffry Frikken (University of Groningen)
Although agree-disagree scales (ADS) have been advocated for their psychometric properties (Willits et al. 2016), such scales may be more subject to acquiescence bias (Dykema et al. 2019) and extreme responding (Krosnick and Presser 2010; Liu et al. 2015). Through acquiescence and the repeated use of the same response categories, ADS may increase common method variance, thus artificially increasing internal consistency. As a result, ADS are not necessarily better at measuring the underlying construct. Questions that use a construct-specific scale, such as happy-sad or exciting-boring (CSS, also called item-specific; Höhne and Lenzner 2017), alert the respondent to the underlying response dimension. However, the intent of CSS may be less clear than that of ADS: ADS remind respondents that an evaluation is being asked for, while CSS may give the impression that knowledge or facts are desired. Also, CSS are often designed as end-labeled-only items (e.g., happy-sad) because they often do not allow for true midpoint labels. Therefore, little is known about the effects of fully-labeled CSS versus ADS on data quality.
We conducted a 2×2 experiment (CSS versus ADS and end-labeled versus fully-labeled scales) in a questionnaire measuring attitudes towards artificial intelligence in medicine to study how the different scale versions affected data quality. Respondents were randomly assigned to one of the conditions. Data were collected in April 2020 using the LISS Panel, which is representative of the Dutch population (n=2411).
Based on the literature we expected that: 1. the CSS will yield responses with lower reliability and lower acquiescence and extreme response style (ERS) bias than the ADS, and this difference will be larger for the fully-labeled scale than for the end-labeled scale; and 2. the CSS will be associated with longer processing time than the ADS, and this difference will be larger for the fully-labeled scale than for the end-labeled scale. Our initial analyses show a higher internal consistency for ADS than for CSS, corroborating our first hypothesis, though preliminary results do not consistently show acquiescence bias.
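The internal-consistency comparison described above is typically operationalized with Cronbach's alpha. A minimal sketch, using hypothetical toy response matrices (the real analysis uses the LISS Panel data, not these numbers):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) matrix of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_variance = items.sum(axis=1).var(ddof=1)     # variance of the scale total
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Hypothetical 5-point responses for four items in the two scale conditions
ads_scores = np.array([[4, 4, 5, 4], [2, 2, 2, 3], [5, 2, 5, 5], [3, 3, 2, 3]])
css_scores = np.array([[4, 3, 4, 3], [2, 3, 2, 4], [5, 4, 3, 5], [3, 2, 3, 2]])

# In this toy data the more strongly correlated ADS-style items yield the higher
# alpha, illustrating how common method variance can inflate internal consistency.
print(f"ADS alpha: {cronbach_alpha(ads_scores):.2f}")
print(f"CSS alpha: {cronbach_alpha(css_scores):.2f}")
```

Note that a higher alpha for ADS does not by itself imply better measurement of the construct, which is exactly the point the abstract makes.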
Dr Jennifer Dykema (University of Wisconsin-Madison) - Presenting Author
Dr Dana Garbarski (Loyola University)
Professor Nora Cate Schaeffer (University of Wisconsin-Madison)
Ms Tiffany Newman (University of Wisconsin-Madison)
Mr Cameron Jones (University of Wisconsin-Madison)
Professor Dorothy Farrar Edwards (University of Wisconsin-Madison)
Many studies of health ask respondents to report about the occurrence of chronic conditions -- such as asthma, heart problems, cancer, and diabetes -- for themselves and members of their family. Questions about chronic conditions are typically presented as a set or battery in which the individual items are formatted for a “yes” or “no” response and the full list of items is introduced by the phrase “has a doctor ever told you that you had …” or “has a doctor ever told a member of your immediate family that they have …” When designed for administration by an interviewer, the introductory phrase may be read for the first few items, after which it is left to the interviewer’s discretion whether and when to repeat it. The purpose of this study is to use interaction coding to examine data quality issues surrounding these important and seemingly straightforward questions. We code interviewer-respondent interaction from 375 computer-assisted telephone survey interviews designed to measure perceptions of barriers and facilitators to participating in medical research, in which respondents were asked about the occurrence of nine chronic conditions for themselves and their family members. Respondents are from a purposive sample that strategically recruited sample members from underrepresented groups. From transcripts, we code behaviors that signal problems with the survey response process, including question misreadings by interviewers as well as respondents’ uncodable answers, requests for repetition or clarification, and expressions of uncertainty. Preliminary results suggest that respondents frequently exhibit behaviors that indicate problems with comprehension of the terms in the questions and difficulties mapping their experiences onto the dichotomous yes/no responses. Further analysis will examine whether these patterns of difficulty vary by respondents’ race, ethnicity, and other sociodemographic characteristics.
We also present results from a model of factors that predict when interviewers include or omit the introductory statement in order to shed light on variables that might influence the decision-making process of the interviewer. Results from this analysis of interviewer-respondent interaction are used to make recommendations about the design and administration of questions assessing chronic conditions in survey interviews.
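The transcript-coding step described above amounts to tallying, per question-answer exchange, which problem behaviors occurred. A minimal sketch with illustrative code labels (the labels and data here are hypothetical, not the authors' actual codebook):

```python
from collections import Counter

# Hypothetical behavior codes assigned to four question-answer exchanges
# during transcript coding; real studies code hundreds of exchanges.
coded_exchanges = [
    ["exact_reading", "codable_answer"],
    ["misreading", "request_clarification", "codable_answer"],
    ["exact_reading", "uncodable_answer", "expression_uncertainty"],
    ["exact_reading", "codable_answer"],
]

# Tally how often each behavior code occurs across exchanges
code_counts = Counter(code for exchange in coded_exchanges for code in exchange)
n = len(coded_exchanges)
for code, count in code_counts.most_common():
    print(f"{code}: {count}/{n} exchanges ({count / n:.0%})")
```

Frequencies like these are the raw material for the models of interviewer behavior (e.g., predicting when the introductory statement is omitted) mentioned in the abstract.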
Ms Feline Platzer (University of Groningen / University Medical Center Groningen) - Presenting Author
Professor Nardi Steverink (University of Groningen / University Medical Center Groningen)
Dr Marieke Haan (University of Groningen)
Dr Mathieu de Greef (University of Groningen)
Dr Martine Goedendorp (University of Groningen / University Medical Center Groningen)
Research focusing on older adults with a lower socioeconomic status (SES) is complex and faces several methodological challenges, such as high rates of non-response and dropout, and cognitive difficulties associated with the literacy of the target group. The use of cognitively demanding tasks in research with lower-SES older adults may therefore affect the trustworthiness of the data. Visual tools are a valuable asset in research with lower-literacy groups because they support cognitive abilities such as abstract thinking. Photo-elicitation interviews are one such tool and consist of incorporating photographs into an interview. Although photo-elicitation could be a suitable method to use with lower-SES older adults, knowledge about the development and execution of photo-elicitation studies with this target group is scarce.
We developed a photo-elicitation study using researcher-gathered photographs to gain more insight into the positive health perceptions of older adults with a lower SES. The study aimed to explore whether older adults with a lower SES perceived a sense of control over their health. Because, to our knowledge, positive health perceptions of older adults with a lower SES have not been investigated before using the researcher-driven photo-elicitation interview technique, we developed a new research strategy consisting of three phases: development, testing, and execution. During the development phase, we gathered health-related photographs, developed the topic list, and discussed challenges with a research team of experienced social science researchers. During the testing phase, we tested the photographs with the target group and with professionals working with the target group, using interviews and focus group sessions. In the execution phase, we recruited members of the target group and conducted a total of 17 photo-elicitation interviews. Participants were invited to reflect on ten health-related photographs selected after the development and testing phases, and to describe whether they perceived a sense of control over the situations displayed in the photographs. Based on this study, we summarize the lessons learned during the three phases and describe recommendations for future research using photo-elicitation interviews with lower-SES older adults.
This study serves as a first step towards a better understanding of the methodological issues involved in using photo-elicitation interviews with lower-SES older adults. During the presentation, we will focus on the research strategy, the lessons learned during the process, and the recommendations for further research using photo-elicitation methods with this target group.
Dr Yfke Ongena (University of Groningen) - Presenting Author
Dr Marieke Haan (University of Groningen)
Waiting room surveys are commonly conducted to study patients’ attitudes, behaviors, and other characteristics, but guiding literature on such surveys for practitioners is scarce and outdated (Pirotta et al. 2002). In this presentation we synthesize practical guidelines from prior studies and our own experiences. We compare waiting room surveys with similar approaches, such as public intercept surveys and drop-off-pick-up surveys, and discuss our experiences with a waiting room survey conducted at the Radiology department of the University Medical Center Groningen. Patients scheduled for a CT or MRI scan were approached by students working in pairs. One student unobtrusively filled out an observation sheet, noting the waiting room (CT or MRI), the date, the time, the gender of the approached person, the number of people present in the waiting room, and the number of caregivers accompanying the respondent. After the request to participate, both respondents and non-respondents were asked their year of birth. Of the 249 approached patients, 208 (83.5%) were willing to fill out the questionnaire, and for 189 patients (75.9%) a complete questionnaire (with at least 75% of the questions answered) was available.
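The two rates reported above follow directly from the counts, with the completion rate computed against all approached patients rather than against those who agreed. A quick check of the arithmetic:

```python
approached = 249   # patients approached in the waiting room
willing = 208      # agreed to fill out the questionnaire
complete = 189     # returned a questionnaire with at least 75% of questions answered

cooperation_rate = willing / approached
completion_rate = complete / approached

print(f"Cooperation rate: {cooperation_rate:.1%}")  # 83.5%
print(f"Completion rate:  {completion_rate:.1%}")   # 75.9%
```

Reporting both rates, with the denominator stated explicitly, is exactly the kind of practice the review argues is often missing from published waiting room surveys.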
From our review, we conclude that waiting room surveys, though limited to patients and their caregivers, can provide useful information on patients’ perspectives on health care. Response rates in waiting rooms are usually high, but are often not even reported. These surveys also allow for the collection of paradata, i.e., relevant information on the circumstances of the request to participate in survey research, and the behavior of surveyors can easily be controlled or investigated in an experimental design.