Participant consent to data linkage
Session Organiser: Dr Ting Yan (Westat)
Time: Friday 23 July, 15:00 - 16:30
It is not uncommon for survey respondents to encounter requests to link their survey responses to other data sources (e.g., administrative records, social media data, biodata, and sensor data). Respondents are also increasingly being asked to have some of their information collected passively (such as their geolocation tracked by GPS on their mobile devices). This session discusses important issues surrounding the process of asking for consent to data linkage and to passive data collection. Papers included in this session examine the following topics:
- impact of the wording and placement of the consent language
- using priming to increase respondents’ likelihood to consent
- impact of mode and device on consent
- individual respondents’ consent decision-making processes
- a meta-analytic review of the literature on consent
- impact of smartphone usage behaviour on willingness to agree to passive data collection
These papers have important practical implications for practitioners and researchers.
Keywords: consent, data linkage, passive data collection, mode, device, wording, placement
Miss Anne Elevelt (Utrecht University) - Presenting Author
Dr Vera Toepoel (Utrecht University)
Dr Peter Lugtig (Utrecht University)
The widespread adoption of digital technologies is expanding opportunities for survey researchers to enhance survey research with other data sources, such as administrative records, social media data, biodata or sensor data. A crucial element in enabling successful research with linked datasets, however, is obtaining respondents’ consent, in order to avoid significant amounts of missing data and to minimize the risk of consent bias. Unfortunately, there is still large variability in consent rates across studies, which cannot be fully explained and which hinders the full exploitation of data linkage potential. Our aim is to improve our understanding of the circumstances that lead to low or high consent rates.
To achieve this goal, we conducted a systematic review and meta-analysis to identify modifiable aspects of the data linkage consent request that influence consent rates. A systematic literature search (following the PRISMA guidelines) of six databases yielded 45 eligible manuscripts. An inventory of all experiments conducted in these manuscripts revealed large variation in the aspects of the consent question covered. We performed a network meta-analysis for the two most-covered aspects (i.e., sponsorship and question wording). All other categories were systematically reviewed. Our results show that how researchers ask consent questions matters greatly for achieving high consent rates. Sponsorship, incentives, mode, position, relevancy, study duration and opt-out also affect consent rates. The results from this study can be useful in designing future consent protocols.
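The inverse-variance pooling step underlying such a meta-analysis can be sketched in Python. This is a minimal illustration of a random-effects pool of study-level consent rates, not the authors’ actual network meta-analysis model; the rates and sample sizes below are invented:

```python
import math

# Hypothetical study-level (consent rate, sample size) pairs -- invented.
studies = [(0.62, 800), (0.45, 1200), (0.71, 350), (0.58, 2000)]

# Convert each proportion to a log-odds with its sampling variance.
effects = []
for p, n in studies:
    logit = math.log(p / (1 - p))
    var = 1 / (n * p) + 1 / (n * (1 - p))  # variance of the logit
    effects.append((logit, var))

# Fixed-effect inverse-variance pooled estimate.
w = [1 / v for _, v in effects]
pooled_fe = sum(wi * e for wi, (e, _) in zip(w, effects)) / sum(w)

# DerSimonian-Laird estimate of between-study variance (tau^2).
q = sum(wi * (e - pooled_fe) ** 2 for wi, (e, _) in zip(w, effects))
df = len(effects) - 1
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (q - df) / c)

# Random-effects pooled estimate, back-transformed to a consent rate.
w_re = [1 / (v + tau2) for _, v in effects]
pooled_re = sum(wi * e for wi, (e, _) in zip(w_re, effects)) / sum(w_re)
pooled_rate = 1 / (1 + math.exp(-pooled_re))
print(round(pooled_rate, 3))
```

A network meta-analysis extends this idea to compare multiple question-design variants across studies, but the pooling logic per comparison is the same.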
Dr Jonathan Burton (University of Essex)
Professor Mick Couper (University of Michigan)
Professor Thomas Crossley (European University Institute)
Professor Annette Jäckle (University of Essex) - Presenting Author
Dr Sandra Walzenbach (University of Konstanz)
The existing literature on survey respondents’ consent to data linkage contains a number of puzzling findings. Empirical correlates of consent are inconsistent. Individual survey respondents appear to have a latent ‘willingness to consent’ when multiple consent requests are asked within one interview. However, when multiple requests are asked over time in a panel survey, there is less evidence of a latent willingness to consent, and many respondents who decline to consent reverse this decision if asked again at a later date. Efforts to increase consent rates through experimental manipulation of the requests have produced mixed results. Perhaps more worryingly, comprehension of linkage requests appears to be poor. Overall, the process by which a particular survey respondent in a particular context does or does not provide consent is poorly understood.
Our goal in this paper is to open the “black box” in an effort to understand how individuals make consent decisions and – more explicitly – how informed consent happens. The consent decision is necessarily made within a limited time frame (a survey or interview), and with incomplete information. We hypothesize that survey respondents primarily use heuristic decision processes to make the consent decision, but that these processes are heterogeneous across individuals and contexts, differing in the amount and nature of the information that is used in making the decision; and that different heuristic decision processes are associated with differences in consent propensities and comprehension.
To explore these hypotheses, we implemented experimental data collection in the Understanding Society Innovation Panel (n~2,900) and in three different surveys (with n’s ranging from ~2,000 to 5,700) administered to members of an online access panel in the UK. Respondents were presented with a request for consent to link the survey to tax record data. Additional information was collected both from survey paradata and follow-up questions, including how the consent decision was made.
We find that respondents self-report different decision processes, some of which are more systematic (considering the consequences of consent) while others suggest the use of less or different information (for example, based on “trust”, “gut-feeling”, or “habit”). These self-reported decision processes are corroborated by markers of effort: those who report a more systematic consideration of the request take longer to respond. Self-reported processes are also associated with outcomes. In particular, more “systematic” decision processes are associated with higher consent, greater comprehension and greater confidence in decisions, as are decision processes that focus on trust in the survey organization or data owner. Conversely, decisions described as “gut-feeling” or habitual are associated with lower consent and lower comprehension. Self-reported process is weakly predicted by background characteristics and modestly affected by our survey design manipulations. These findings point to an explanation of why attempts to achieve higher consent rates and more informed consent by providing additional information tend to have little success: if many of those who withhold consent are using very rapid and unreflective decision processes, additional information is unlikely to be incorporated into their decision.
Professor Annette Jäckle (University of Essex)
Dr Jonathan Burton (University of Essex)
Professor Mick Couper (University of Michigan)
Professor Thomas Crossley (European University Institute)
Dr Sandra Walzenbach (University of Konstanz) - Presenting Author
Survey data are increasingly being linked to administrative records to maximize the value of the data while minimizing respondent burden. But relatively little is known about how best to word and format such consent requests. In this paper we describe the results of a series of independent experiments designed to better understand how the design of the consent request can affect respondents’ decisions, their understanding of the linkage process, and their confidence in the decision made. Specifically, we experimentally varied: 1) the wording of the request (comparing an “easy” version with simplified text to a “standard” version), 2) placement of the consent request in the survey (early versus late), 3) an opt-out version (i.e., “Press ‘next’ to continue” or explicitly opt out) versus the standard opt-in consent question, 4) offering additional information (i.e., adding a third response option “I need more information before making a decision”), and 5) a trust priming experiment focusing on the data holder.
Some of the experiments were conducted in the Understanding Society Innovation Panel (wave 11, 2018, n~2,900) while others were conducted in three different surveys (with n’s ranging from ~2,000 to 5,700) administered to members of an online access panel in the UK, both as part of a larger project focused on understanding and improving informed consent to administrative data linkage among survey respondents. While the primary outcome of interest for the above experiments is the rate of consent, secondary outcomes include objective understanding of the consent process, subjective knowledge of the process, and confidence in the decision. We also examine the effects of the experimental manipulations on time taken to respond to the consent request and whether respondents consulted additional materials describing the process of consent.
Briefly, we find that respondents are less likely to consent to data linkage if the wording of the request is difficult and it is asked late in the questionnaire. Position has no effect on consent if the wording is easy; wording has no effect on consent if the position is early. However, easy wording of the consent question increases objective understanding of the linkage request. We find that making consent the default option did not increase consent rates, while offering additional information on the linkage process decreased consent rates. Finally, those exposed to a trust prime had higher rates of consent. In this paper, we describe the results of these experiments in further detail, and frame our findings in the broader context of the research on understanding and improving data linkage consent.
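The wording-by-placement interaction pattern described above can be expressed as a simple contrast on cell-level consent rates. The counts below are invented for illustration only, not the study’s actual data:

```python
# Hypothetical cell counts from a 2x2 wording (easy/standard) x placement
# (early/late) consent experiment -- invented numbers, not the study data.
cells = {
    ("easy", "early"):     (420, 600),  # (consented, asked)
    ("easy", "late"):      (415, 600),
    ("standard", "early"): (418, 600),
    ("standard", "late"):  (330, 600),
}

rates = {cell: consented / asked for cell, (consented, asked) in cells.items()}

# Effect of late placement within each wording condition.
effect_easy = rates[("easy", "late")] - rates[("easy", "early")]
effect_standard = rates[("standard", "late")] - rates[("standard", "early")]

# Interaction contrast: placement hurts consent only under standard wording.
interaction = effect_standard - effect_easy
print(round(effect_easy, 3), round(effect_standard, 3), round(interaction, 3))
```

A nonzero interaction contrast of this kind is what the pattern “position has no effect if the wording is easy” implies; in practice it would be tested with a logistic regression including the wording-by-placement interaction term.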
Dr Olga Maslovskaya (University of Southampton) - Presenting Author
Mixed-mode surveys and online data collection are becoming more popular as the data collection paradigm in social surveys changes. More social survey data are collected online as Internet access and mobile device ownership become more universal, while mixed-mode designs help reduce survey costs.
There is increasing demand for social survey data to be linked to administrative data sources as well as to social media data, as linked data sources allow researchers to obtain unique insights into the phenomena investigated in social research.
Mixed-device online surveys allow respondents to use different devices for survey completion, and preliminary work by Maslovskaya et al. (2019) demonstrated that, in the context of Understanding Society Innovation Panel (IP) Wave 9, respondents using laptops and desktops were more likely to consent to linkage to HMRC data than those using smartphones. There are potential explanations for how the device might affect consent to data linkage: 1) laptops and smartphones might be used in different contexts, which might affect respondents’ perceptions of privacy; 2) respondents might perceive the security of their data differently when using different devices.
Jäckle and colleagues (2020) conducted an experiment comparing web and face-to-face modes of data collection on consent to tax data linkage, using Understanding Society IP Wave 11 data. Wenz and colleagues (2020) compared face-to-face and web modes of data collection on consent to linkage of survey data to Twitter data in the context of Understanding Society IP Wave 10. However, not much is yet known about consent to data linkage and differences by device in mixed-device online surveys in the UK.
This paper addresses this gap in knowledge by analysing consent to data linkage to tax records as well as consent to linkage of survey data to Twitter data in the UK mixed-device online surveys.
Understanding Society IP Waves 10 and 11 as well as main survey Waves 8-10 will be used for the analysis. This analysis builds on preliminary work conducted by Maslovskaya et al. (2019) and also extends it to two different contexts: tax record data and Twitter data contexts. Issues of potential endogeneity will be investigated and discussed.
The results from this analysis will be helpful for redesigning strategies for obtaining consent to data linkage in the context of mixed-device online surveys.
Ms Julneth Martinez Atencio (University of Neuchâtel) - Presenting Author
Dr Caroline Roberts (University of Lausanne)
Dr Jessica Herzing (University of Bern)
Professor Daniel Gatica-Perez (EPFL and Idiap Research Institute)
The proliferation of new technologies, especially smartphones, has opened the door to innovative opportunities to improve survey data collection by combining it with new types of data. Access to the native components of smartphones through mobile applications allows the gathering of data that does not require the active involvement of the user (i.e. passive data collection). However, low willingness to participate, which translates into low response rates, has been a common and fundamental problem that must be solved in order to take advantage of this technology in survey research. Understanding the variables that influence willingness is, therefore, key. Using data from a methodological experiment embedded in a probability-based sample survey of the general population conducted in Switzerland in 2019, the present study investigates the extent to which stated willingness to agree to passive data collection on mobile devices varies as a function of respondents’ smartphone usage behaviours and attitudes. The study uses a mix of exploratory and inferential statistical methods to a) differentiate users based on the activities they complete on their smartphones, b) investigate which variables predict divergent smartphone usage patterns, and c) assess the extent to which smartphone behaviours are predictive of willingness to share three types of passive data with academic researchers, and whether that influence is more or less important than that of respondents’ attitudinal characteristics (including attitudes towards the internet and data privacy concerns). Two main clusters based on smartphone usage behaviour are identified, which can be distinguished by the extent to which they involve users divulging personal data. Downloading apps, online banking, and using a smartphone for browsing and posting content on social media sites are key behaviours distinguishing users in each cluster.
Predictors of belonging to the more data-risk-tolerant cluster include gender, age, frequency of smartphone use and general concerns around online data privacy. However, cluster membership (i.e. respondents’ smartphone behaviours) has a less important influence on stated willingness to share passive data than respondents’ attitudinal characteristics. Future research should investigate whether these relationships hold in the same way for people’s actual participation in research involving passive data collection via a smartphone app.
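A two-cluster solution of the kind described above can be illustrated with a simple k-means on binary usage indicators. This toy pure-Python sketch uses invented profiles, not the study’s actual method or data:

```python
import random

random.seed(0)

# Invented binary usage profiles: 1 = respondent reports the activity.
# Columns: downloads apps, online banking, posts on social media, browses.
profiles = (
    [[1, 1, 1, 1] for _ in range(20)]    # data-intensive users
    + [[0, 0, 0, 1] for _ in range(20)]  # light users
)
random.shuffle(profiles)

def kmeans(data, k=2, iters=10):
    # Initialise centroids with the first k distinct rows (fine for a toy case).
    centroids = []
    for row in data:
        if row not in centroids:
            centroids.append([float(x) for x in row])
        if len(centroids) == k:
            break
    for _ in range(iters):
        # Assign each profile to the nearest centroid by squared distance.
        clusters = [[] for _ in range(k)]
        for row in data:
            dists = [sum((a - b) ** 2 for a, b in zip(row, c)) for c in centroids]
            clusters[dists.index(min(dists))].append(row)
        # Recompute each centroid as the coordinate-wise mean of its cluster.
        centroids = [
            [sum(col) / len(cl) for col in zip(*cl)] if cl else centroids[i]
            for i, cl in enumerate(clusters)
        ]
    return centroids, clusters

centroids, clusters = kmeans(profiles)
print(sorted(len(cl) for cl in clusters))  # recovers the two invented groups: [20, 20]
```

With real survey data the clustering would be run on many more behavioural indicators, and cluster membership would then enter a model predicting stated willingness to share passive data.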