All time references are in CEST
Complex measurements in online self-completion surveys

Session Organisers: Dr Cristian Domarchi (University of Southampton), Professor Lisa Calderwood (UCL Centre for Longitudinal Studies), Mr Curtis Jessop (National Centre for Social Research)
Time: Tuesday 18 July, 09:00 - 10:30
Accurately capturing complex phenomena is critical for the success of social surveys. As the field increasingly shifts towards online data collection, a key challenge emerges: how to administer complex measures without compromising data quality or comparability with other survey modes.
The session is proposed by Research Strand 5 of the Survey Futures* project, “Complex measurement in self-completion surveys”, which focuses on how to collect measures of i) industry and occupation, ii) event histories and retrospective data, iii) consent for data linkage and re-contact, and iv) cognitive assessments in self-completion surveys.
Each of these measures presents unique challenges. Ensuring that online participants provide sufficient detail to allow for accurate industry and occupation coding in the absence of interviewer probing can be difficult. Retrospective life history data collected in self-completion surveys may be less complete due to lack of interviewer support. Consent rates for data linkage are consistently lower in online-administered surveys. Adapting cognitive assessments designed for in-person administration for use online may not be feasible or may result in significant mode effects.
This session invites researchers to share insights into how to improve the collection of these and other types of complex measures in online self-completion surveys. We welcome submissions that present evidence on mode and measurement effects in these and other complex measures, as well as findings from trials aimed at enhancing data quality when collecting complex measures in online surveys. Submissions employing experimental designs or other innovative methodologies that can inform future survey strategies are especially encouraged.
*Survey Futures is a UKRI-ESRC funded research programme dedicated to ensuring that large-scale social surveys in the UK can innovate and adapt in an evolving landscape. The programme is a multi-institution collaboration between universities and survey practice organisations.
Keywords: self-administered surveys, complex measurements, occupation coding, consent, retrospective data, cognitive function
Ms Sophie Gurr (Office for National Statistics) - Presenting Author
The Centre for Crime and Justice at the Office for National Statistics (ONS) is redesigning the Crime Survey for England and Wales (CSEW) as part of a wider transformation programme.
Following a public consultation in 2022, key changes were made to the CSEW including redesign from a cross-sectional, face-to-face survey to a multi-modal, longitudinal panel design incorporating an online survey mode.
Prior to this, the CSEW had remained largely unchanged since its introduction in 1981, due to the importance of maintaining the time series. The risk of introducing new modes was mitigated by retaining face-to-face interviews at wave 1. Respondents are interviewed again 12 months later (wave 2), currently by telephone and, following further development, online.
One of the critical requirements of the CSEW relates to the collection of accurate incidence data from victims of multiple or ‘complex’ crimes. Our Discovery research concluded that whilst an online mode would likely be effective in obtaining data from respondents with no experiences of crime or with simple ones, it is more challenging for victims of multiple, ‘complex’ or series crimes. Research employing a Respondent Centred Design framework generated insight into how respondents conceptualise and articulate their experience of crime. We found variation in victims’ mental models, including in the order of event recall: for example, by forward or reverse chronology, or by the severity or impact of incidents.
The structure and wording of the CSEW screener questions have been redesigned accordingly, to encourage respondents to report their experience in a way that aims to increase the accuracy of incidence data. Respondents report their whole experience upfront in the screener module and further questions are asked to establish whether multiple crimes occurred within one incident. The highest priority crime is subsequently determined and recorded, aligning with the relevant code from the Home Office Counting Rules.
Mr Wai Tak Tung (University of Mannheim) - Presenting Author
Dr Alexander Wenz (University of Mannheim)
Digital skills have become important for navigating today’s information society but are still unevenly distributed in the population. While prior research has mostly focused on studying general Internet use, research on smartphone-specific digital inequalities is still scarce. To date, we have a limited understanding of the distribution of smartphone skills in the population as well as the determinants and consequences of inequalities in smartphone skills. In addition, existing measurement instruments of digital skills mostly rely on survey-based self-reports or small-scale laboratory-based performance tests that are potentially subject to measurement and representation errors.
In this paper, we report the results from a survey experiment comparing a scenario-based measure as an innovative method for measuring smartphone skills with a self-reported skills measure. In the scenario-based measure, respondents are presented with hypothetical scenarios of activities that they would perform on their smartphone, such as buying a train ticket with an app that is not yet installed on their device. They are then asked to correctly order a set of steps to carry out these activities, such as downloading an app from the app store, entering login details, and searching for train connections. In the self-reported measure, respondents are asked to rate their smartphone skills on a scale from 1=Beginner to 5=Advanced. Data were collected in the German Internet Panel, a probability-based online panel of the general population aged 16-75 in Germany, in March 2022. First, we will examine to what extent both measures capture the same construct by conducting an Exploratory Factor Analysis. Second, we will assess whether predictors of smartphone skills vary by how skills are being measured by fitting OLS regression models.
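The two analysis steps described above can be sketched in code. The following is an illustrative Python example, not the authors' analysis: it uses simulated data and hypothetical variable names (scenario_score, self_report, age) to show a one-factor analysis checking whether both measures tap a common construct, followed by separate OLS regressions comparing predictors across the two measures.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n = 500

# Simulated respondents: a single latent skill drives both measures
skill = rng.normal(size=n)
df = pd.DataFrame({
    "scenario_score": skill + rng.normal(scale=0.5, size=n),  # scenario-based measure
    "self_report": np.clip(np.round(3 + skill + rng.normal(scale=0.8, size=n)), 1, 5),
    "age": rng.integers(16, 76, size=n),
})

# Step 1: do both measures load on one common factor?
fa = FactorAnalysis(n_components=1)
fa.fit(df[["scenario_score", "self_report"]])
print("factor loadings:", fa.components_)

# Step 2: do predictors of skill differ depending on how skill is measured?
m1 = smf.ols("scenario_score ~ age", data=df).fit()
m2 = smf.ols("self_report ~ age", data=df).fit()
print(m1.params)
print(m2.params)
```

In the real study, the factor analysis would include the individual scenario items rather than a single composite score, and the regressions would include a fuller set of sociodemographic predictors.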
Mrs Jimena Sobrino Piazza (Université de Lausanne) - Presenting Author
Ms Caroline Roberts (Université de Lausanne)
As surveys increasingly transition to online data collection methods, Time Use Surveys (TUS) are no exception. Given the potential gains and the reduced burden for researchers offered by online tools, more and more TUS are being conducted through digital diary data collection methods. However, because of the complexity of the data they collect, TUS have traditionally been highly burdensome for respondents. As best practices for adapting traditional TUS to online formats continue to evolve, finding ways to reduce respondent burden and to motivate participation becomes essential for ensuring adequate response rates and minimizing the risk of nonresponse bias. This paper presents findings from a pilot study of a TUS that includes a methodological experiment comparing alternative digital diary methods and participation incentives. A total of 4,000 Swiss adults were invited to participate in the online survey, with one-third randomly assigned to use an app-based tool and the remaining participants completing the survey via a browser-based tool. Additionally, an incentive experiment tested strategies to reduce perceived costs, highlight benefits, and build trust. Participants were assigned to one of three groups: an unconditional non-monetary incentive, an unconditional monetary incentive, or an informational leaflet. The results of this experiment aim to provide insights into the relative effectiveness of different digital diary methods for time use surveys and strategies for optimizing participant recruitment.
Mr Antoine Calzavara (Médiamétrie) - Presenting Author
Mrs Fabienne Le Sager (Médiamétrie)
The specifics of individual phone ownership within each household present significant challenges to accurately measuring mobile phone and smartphone ownership: age gaps, the blurred boundary of ownership in the context of shared use, and so on. To address these complexities, Médiamétrie (France) developed innovative strategies to enhance the design of its dedicated self-administered online survey (CAWI). A unique challenge arises in this context, as one household member is tasked with reporting detailed information on behalf of all other household members. This proxy reporting introduces additional complexity in ensuring data accuracy and completeness.
Balancing the precision of data collection with the minimization of respondent burden presents several challenges. Younger respondents may demonstrate limited interest in survey participation, leading to disengagement, while older respondents may encounter difficulties distinguishing smartphones from regular mobile phones, particularly when required to provide information for other household members. Addressing these issues is critical for data quality.
This presentation examines the methodological advancements implemented in the mobile equipment module of the Baromètre des Équipements. Central to these developments is the dynamic adaptation of the questionnaire structure, which utilizes tailored filters to align with the household composition and the responses provided by the proxy respondent. Additionally, verification steps are integrated strategically to validate unlikely responses without overburdening participants or inducing response fatigue, thereby enhancing both data accuracy and respondent engagement.
The proposed solutions are informed by an A/B test conducted during the second half of 2024. Comparative results from this study provide quantitative evidence of the potential impact of improved questionnaire design on the accuracy of mobile phone and smartphone ownership measurements. In particular, the redesigned module demonstrates significant gains in data precision, especially for younger populations whose information is often collected via proxy reporting.
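To illustrate how such an A/B comparison might be evaluated, the sketch below (not Médiamétrie's actual analysis) applies a two-sample proportions test to hypothetical counts of correctly classified devices under the original module (arm A) and the redesigned module (arm B). All figures are invented for demonstration.

```python
import numpy as np
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical counts: respondents whose proxy-reported device type
# matched a validation source, per questionnaire arm
correct = np.array([410, 455])  # arm A (original), arm B (redesigned)
totals = np.array([500, 500])   # respondents per arm

# Two-sample z-test for a difference in classification accuracy
stat, pval = proportions_ztest(correct, totals)
print(f"accuracy A = {correct[0]/totals[0]:.2f}, "
      f"accuracy B = {correct[1]/totals[1]:.2f}, "
      f"z = {stat:.2f}, p = {pval:.4f}")
```

In practice the comparison would be run within age bands, since the abstract reports that gains were concentrated among younger household members reported by proxy.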
Professor Lisa Calderwood (Centre for Longitudinal Studies, University College London) - Presenting Author
Mr Matt Brown (Centre for Longitudinal Studies, University College London)
Dr Marc Asensio-Majon (Centre for Longitudinal Studies, University College London)
Dr Cristian Domarchi (University of Southampton)
Dr Olga Maslovskaya (University of Southampton)
Accurately capturing complex phenomena is critical for the success of social surveys. As the field increasingly shifts towards online data collection, a key challenge emerges: how to administer complex measures without compromising data quality or comparability with other survey modes.
In this paper we present a project being conducted as part of Survey Futures* which seeks to address how best to collect i) industry and occupation, ii) event histories and retrospective data, iii) consent for data linkage and re-contact, and iv) cognitive assessments in online self-completion surveys.
Each of these measures presents unique challenges. Ensuring that online participants provide sufficient detail to allow for accurate industry and occupation coding in the absence of interviewer probing can be difficult. Retrospective life history data collected in self-completion surveys may be less complete due to lack of interviewer support. Consent rates for data linkage are consistently lower in online-administered surveys. Adapting cognitive assessments designed for in-person administration for use online may not be feasible or may result in significant mode effects.
In this project, we compile evidence on methods used to collect complex measurements in online self-administered surveys and conduct our own analyses using data from a variety of sources. Our goal is to gather insights into the approaches employed to collect these data, the outcomes achieved, and the recommendations drawn from their application. Our findings will inform the development of best practice guidelines for survey agencies and research organisations.
*Survey Futures is a UKRI-ESRC funded research programme dedicated to ensuring that large-scale social surveys in the UK can innovate and adapt in an evolving landscape. The programme is a multi-institution collaboration between universities and survey practice organisations.
Dr Konstantinos Tsigaridis (UCL)
Dr Alessandra Gaia (UCL)
Dr Vanessa Moulton (UCL) - Presenting Author
Dr Liam Wright (UCL)
Dr Matt Brown (UCL)
Dr Richard Silverwood (UCL)
Accurately measuring cognitive ability is a crucial aspect of many social surveys. Cognitive testing has most commonly been implemented in face-to-face surveys. However, as using the web and other remote methods, including video interviewing, becomes increasingly common, survey researchers are exploring the potential for administering cognitive tests in these modes. A vital consideration is the extent to which data collection mode could lead to variations in measurement and data quality. In this study, we investigate the impact of three survey administration modes – face-to-face, web, and video – on the Backwards Digit Span (BDS), a cognitive assessment that measures working memory. Study participants aged 20 to 40 from across England (n=1510) were randomly assigned to one of nine mode combinations across two survey waves, conducted two weeks apart, and asked to complete the same BDS task at each wave. We aimed to discern how different modes influence performance in the test.
Using generalised linear models and generalised linear mixed models, we analyse mode effects in each survey wave separately and in both waves combined, accounting for individual variability and potential practice effects in the latter analyses. We examine differences by mode in successful test completion and in scores. Further analyses explore mode combinations, subgroup differences (e.g., by gender, age, and education), and secondary outcomes, such as test duration and indicators of cheating or lack of effort.
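The mixed-model strategy described above can be sketched as follows. This is an illustrative Python example with simulated two-wave data, not the study's analysis: a linear mixed model with mode and wave as fixed effects and a random intercept per respondent to absorb individual variability. All variable names and effect sizes are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 300  # simulated respondents, each observed at two waves

person = np.repeat(np.arange(n), 2)
wave = np.tile([1, 2], n)
mode = rng.choice(["f2f", "web", "video"], size=2 * n)
ability = np.repeat(rng.normal(size=n), 2)  # person-level variability

# Simulated BDS score: practice effect at wave 2, hypothetical web penalty
score = (5 + ability
         + 0.3 * (wave - 1)
         - 0.4 * (mode == "web")
         + rng.normal(scale=1.0, size=2 * n))

df = pd.DataFrame({"person": person, "wave": wave, "mode": mode, "score": score})

# Linear mixed model: mode and wave as fixed effects,
# random intercept per respondent (groups=person)
mm = smf.mixedlm("score ~ C(mode) + wave", df, groups=df["person"]).fit()
print(mm.summary())
```

A parallel generalised linear model (e.g. a logit on successful completion) would handle the binary outcome mentioned in the abstract; the mixed specification here corresponds to the combined-waves analysis that accounts for individual variability and practice effects.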
This study addresses critical challenges in administering complex measures in self-completion surveys. The insights gained will inform best practices for mode selection and measurement design and guide researchers in analysing data collected in mixed-mode studies.