
ESRA 2025 Preliminary Program

All time references are in CEST

Complex measurements in online self-completion surveys

Session Organisers: Dr Cristian Domarchi (University of Southampton), Professor Lisa Calderwood (UCL Centre for Longitudinal Studies), Mr Curtis Jessop (National Centre for Social Research)
Time: Thursday 17 July, 09:00 - 10:30
Room: Ruppert D - 0.22

Accurately capturing complex phenomena is critical for the success of social surveys. As the field increasingly shifts towards online data collection, a key challenge emerges: how to administer complex measures without compromising data quality or comparability with other survey modes.

The session is proposed by Research Strand 5 of the Survey Futures* project, “Complex measurement in self-completion surveys”, which focuses on how to collect measures of i) industry and occupation, ii) event histories and retrospective data, iii) consent for data linkage and re-contact, and iv) cognitive assessments in self-completion surveys.

Each of these measures presents unique challenges. Ensuring that online participants provide sufficient detail to allow for accurate industry and occupation coding in the absence of interviewer probing can be difficult. Retrospective life history data collected in self-completion surveys may be less complete due to lack of interviewer support. Consent rates for data linkage are consistently lower in online-administered surveys. Adapting cognitive assessments designed for in-person administration for use online may not be feasible or may result in significant mode effects.

This session invites researchers to share insights into how to improve the collection of these and other types of complex measures in online self-completion surveys. We welcome submissions that present evidence on mode and measurement effects in these and other complex measures, as well as findings from trials aimed at enhancing data quality when collecting complex measures in online surveys. Submissions employing experimental designs or other innovative methodologies that can inform future survey strategies are especially encouraged.

*Survey Futures is a UKRI-ESRC funded research programme dedicated to ensuring that large-scale social surveys in the UK can innovate and adapt in an evolving landscape. The programme is a multi-institution collaboration between universities and survey practice organisations.

Keywords: self-administered surveys, complex measurements, occupation coding, consent, retrospective data, cognitive function

Papers

Effects of alternative digital diary methods on willingness to participate in a time use survey

Mrs Jimena Sobrino Piazza (Université de Lausanne) - Presenting Author
Ms Caroline Roberts (Université de Lausanne)

As surveys increasingly transition to online data collection methods, Time Use Surveys (TUS) are no exception. Given the potential gains and the reduced burden for researchers offered by online tools, more and more TUS are being conducted through digital diary data collection methods. However, because of the complexity of the data they collect, TUS have traditionally been highly burdensome for respondents. As best practices for adapting traditional TUS to online formats continue to evolve, finding ways to reduce respondent burden and to motivate participation becomes essential for ensuring adequate response rates and minimizing the risk of nonresponse bias. This paper presents findings from a pilot study of a TUS that includes a methodological experiment comparing alternative digital diary methods and participation incentives. A total of 4,000 Swiss adults were invited to participate in the online survey, with one-third randomly assigned to use an app-based tool and the remaining participants completing the survey via a browser-based tool. Additionally, an incentive experiment tested strategies to reduce perceived costs, highlight benefits, and build trust. Participants were assigned to one of three groups: an unconditional non-monetary incentive, an unconditional monetary incentive, or an informational leaflet. The results of this experiment are intended to provide insights into the relative effectiveness of different digital diary methods for time use surveys and strategies for optimizing participant recruitment.
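The two-factor design described above (diary tool crossed with incentive condition) can be sketched as a simple randomization routine. This is an illustrative sketch only, not the authors' implementation: the group labels, equal allocation across incentive conditions, and the seed are assumptions; only the sample size and the one-third/two-thirds tool split come from the abstract.

```python
import random

def assign(n_invitees=4000, seed=42):
    """Hypothetical sketch of the two-factor random assignment:
    one third of invitees to an app-based diary, two thirds to a
    browser-based diary, crossed with three incentive conditions."""
    rng = random.Random(seed)
    incentives = ["non-monetary", "monetary", "leaflet"]  # labels assumed
    assignments = []
    for i in range(n_invitees):
        tool = "app" if rng.random() < 1 / 3 else "browser"
        incentive = rng.choice(incentives)  # equal allocation assumed
        assignments.append({"id": i, "tool": tool, "incentive": incentive})
    return assignments

groups = assign()
```

In a real fieldwork system the assignment would typically be made once, stored with the sample file, and logged for reproducibility; the fixed seed here stands in for that.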


Enhancing Data Accuracy in CAWI Surveys: Adapting Questionnaire Design for Proxy Reporting in Mobile Equipment Studies

Mr Antoine Calzavara (Médiamétrie) - Presenting Author
Mrs Fabienne Le Sager (Médiamétrie)

The specifics of individual phone ownership within each household present significant challenges to accurately measuring mobile phone and smartphone ownership: age gaps, the blurred boundary of ownership when a device is shared, and other complications. To address these complexities, Médiamétrie (France) developed innovative strategies to enhance the design of its dedicated self-administered online survey (CAWI). A unique challenge arises in this context, as one household member is tasked with reporting detailed information on behalf of all other household members. This proxy reporting introduces additional complexity in ensuring data accuracy and completeness.

Balancing the precision of data collection with the minimization of respondent burden presents several challenges. Younger respondents may demonstrate limited interest in survey participation, leading to disengagement, while older respondents may encounter difficulties distinguishing smartphones from regular mobile phones, particularly when required to provide information for other household members. Addressing these issues is critical for data quality.

This presentation examines the methodological advancements implemented in the mobile equipment module of the Baromètre des Équipements. Central to these developments is the dynamic adaptation of the questionnaire structure, which utilizes tailored filters to align with the household composition and the responses provided by the proxy respondent. Additionally, verification steps are integrated strategically to validate unlikely responses without overburdening participants or inducing response fatigue, thereby enhancing both data accuracy and respondent engagement.

The proposed solutions are informed by A/B testing conducted during the second half of 2024. Comparative results from this study provide quantitative evidence of the potential impact of improved questionnaire design on the accuracy of mobile phone and smartphone ownership measurements. In particular, the redesigned module demonstrates significant gains in data precision, especially for younger populations whose information is often collected via proxy reporting.
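A comparison of measurement accuracy between two questionnaire versions, as in the A/B test above, is often summarized with a two-proportion z-test. The sketch below is illustrative only and is not the authors' analysis; the counts are entirely hypothetical, standing in for the share of proxy reports that matched a reference measure under each module version.

```python
from math import sqrt, erf

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Two-sided two-proportion z-test with a pooled variance estimate,
    a standard way to compare accuracy rates between two survey versions."""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # normal-approximation two-sided p-value via the error function
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical counts for illustration only: accurate proxy reports under
# the original (a) vs redesigned (b) module.
z, p = two_proportion_z(success_a=720, n_a=1000, success_b=780, n_b=1000)
```

With these invented counts the redesigned version shows a higher accuracy rate; in practice one would also report the effect size (difference in proportions) with a confidence interval, not just the test statistic.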


Complex Measurements in Online Self-Completion Surveys

Professor Lisa Calderwood (Centre for Longitudinal Studies, University College London) - Presenting Author
Mr Matt Brown (Centre for Longitudinal Studies, University College London)
Dr Marc Asensio-Majon (Centre for Longitudinal Studies, University College London)
Dr Cristian Domarchi (University of Southampton)
Dr Olga Maslovskaya (University of Southampton)

Accurately capturing complex phenomena is critical for the success of social surveys. As the field increasingly shifts towards online data collection, a key challenge emerges: how to administer complex measures without compromising data quality or comparability with other survey modes.

In this paper we present a project being conducted as part of Survey Futures* which seeks to address how best to collect i) industry and occupation, ii) event histories and retrospective data, iii) consent for data linkage and re-contact, and iv) cognitive assessments in online self-completion surveys.

Each of these measures presents unique challenges. Ensuring that online participants provide sufficient detail to allow for accurate industry and occupation coding in the absence of interviewer probing can be difficult. Retrospective life history data collected in self-completion surveys may be less complete due to lack of interviewer support. Consent rates for data linkage are consistently lower in online-administered surveys. Adapting cognitive assessments designed for in-person administration for use online may not be feasible or may result in significant mode effects.

In this project, we compile evidence on methods used to collect complex measurements in online self-administered surveys and conduct our own analyses using data from a variety of sources. Our goal is to gather insights into the approaches employed to collect these data, the outcomes achieved, and the recommendations drawn from their application. Our findings will inform the development of best practice guidelines for survey agencies and research organisations.

*Survey Futures is a UKRI-ESRC funded research programme dedicated to ensuring that large-scale social surveys in the UK can innovate and adapt in an evolving landscape. The programme is a multi-institution collaboration between universities and survey practice organisations.


Evaluating Mode Effects on a Cognitive Assessment Administered via Face-to-face, Web and Video Interviewing: Insights from New Experimental Data

Dr Konstantinos Tsigaridis (UCL)
Dr Alessandra Gaia (UCL)
Dr Vanessa Moulton (UCL) - Presenting Author
Dr Liam Wright (UCL)
Dr Matt Brown (UCL)
Dr Richard Silverwood (UCL)

Accurately measuring cognitive ability is a crucial aspect of many social surveys. Cognitive testing has most commonly been implemented in face-to-face surveys. However, as using the web and other remote methods, including video interviewing, becomes increasingly common, survey researchers are exploring the potential for administering cognitive tests in these modes. A vital consideration is the extent to which data collection mode could lead to variations in measurement and data quality. In this study, we investigate the impact of three survey administration modes – face-to-face, web, and video – on the Backwards Digit Span (BDS), a cognitive assessment that measures working memory. Study participants aged 20 to 40 from across England (n=1510) were randomly assigned to one of nine mode combinations across two survey waves, conducted two weeks apart, and asked to complete the same BDS task at each wave. We aimed to discern how different modes influence performance in the test.
Using generalised linear models and generalised linear mixed models, we analyse mode effects in each survey wave separately and in both waves combined, accounting for individual variability and potential practice effects in the latter analyses. We examine differences by mode in successful test completion and in scores. Further analyses explore mode combinations, subgroup differences (e.g., by gender, age, and education), and secondary outcomes, such as test duration and indicators of cheating or lack of effort.
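The descriptive counterpart of the analyses above (per-mode completion rates and mean scores, before fitting GLMs or mixed models) can be sketched on simulated data. This is not the authors' analysis: the sample is simulated, all effect sizes and completion probabilities are invented, and a real analysis would fit mixed models (e.g. with a statistical package) to account for repeated measures across waves.

```python
import random
from statistics import mean

# Simulate single-wave BDS-style records for three assigned modes.
# All parameters (completion probability, score distribution) are invented.
rng = random.Random(0)
modes = ["face-to-face", "web", "video"]
records = []
for pid in range(1500):
    mode = rng.choice(modes)
    completed = rng.random() < 0.95          # assumed completion rate
    score = rng.gauss(6.0, 1.5) if completed else None  # assumed score dist.
    records.append({"pid": pid, "mode": mode,
                    "completed": completed, "score": score})

# Per-mode completion rate and mean score among completers.
for m in modes:
    sub = [r for r in records if r["mode"] == m]
    done = [r for r in sub if r["completed"]]
    print(m, len(done) / len(sub),
          round(mean(r["score"] for r in done), 2))
```

Comparing these raw summaries across modes is only a first step; the mode-effect estimates reported in the paper come from models that adjust for individual variability and practice effects.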
This study addresses critical challenges in administering complex measures in self-completion surveys. The insights gained will inform best practices for mode selection and measurement design and guide researchers in analysing data collected in mixed-mode studies.