ESRA 2025 Preliminary Program
All time references are in CEST
UX, user-centred, and respondent-centred design: its application and development
Session Organisers
Dr Aditi Das (National Centre for Social Research (NatCen))
Ms Sierra Mesplie-Escott (National Centre for Social Research (NatCen))
Ms Sophie Pilley (National Centre for Social Research (NatCen))
Time: Tuesday 15 July, 13:45 - 15:00
Room: Ruppert 005
Survey research commissioners and survey designers are increasingly paying attention to respondents' experiences in completing their surveys, especially as more surveys move to self-completion modes. As such, commissioners and designers are examining their respondents' journeys, measurement instruments, and data user needs to identify how to optimise the user experience by applying design thinking. Some surveys can feature design thinking from inception, allowing many study components to be optimised for the respondents' experience. Other long-standing surveys contend with redesigning components to meet established data-user needs and commissioner requirements.
In this session, we invite practitioners to showcase work undertaken using user- or respondent-centred design thinking, or UX research methods, in the development of a survey. Examples could include, but are not limited to, the following:
- Usability testing of survey questionnaires, respondent communications, interviewer training, or other survey elements
- Visual design of survey elements, including design for responsiveness and accessibility
- Applying UX principles or frameworks to:
  - Survey instrument development and testing
  - Mapping the respondent's or interviewer's journey through the survey
  - Rethinking conventional survey methods practices
We invite practitioners to present findings from developing or re-developing a survey featuring UX methods, user-centred or respondent-centred design thinking.
Keywords: Respondent experience, Design thinking, UX, UI, Respondent-centred design, Usability testing, Visual design, Survey instrument development, Accessibility, Interviewer journey
Papers
ELSA Event History Calendar User Testing
Miss Sierra Mesplie-Escott (The National Centre for Social Research)
Mr Jerome Swan (The National Centre for Social Research)
Dr Aditi Das (The National Centre for Social Research) - Presenting Author
Mr Richard Bull (The National Centre for Social Research)
Dr Maria Tsantani (The National Centre for Social Research)
Dr Kirsty Bennett (West Sussex County Council)
Dr Darina Peycheva (UCL)
Ms Kate Taylor (The National Centre for Social Research)
As many surveys continue to transition to online self-completion, they increasingly require extensive and complex measures that can prove burdensome for respondents. For this reason, prioritising respondent experience in new modes is more crucial now than ever. This paper presents findings from in-person moderated usability testing carried out as part of the English Longitudinal Study of Ageing (ELSA). In 2007, ELSA initiated a special sub-study aimed at collecting comprehensive life histories with interviewer assistance. The current iteration transitions to an online format, incorporating many measures from the 2007 survey alongside a newly introduced Event History Calendar (EHC) designed to enhance recall. Unlike the current iteration, the 2007 EHC was designed for interviewers to use to aid respondents' recall. Research indicates that the EHC can improve memory recall, temporal accuracy, and participant engagement during face-to-face data collection; however, little is understood about its effectiveness in self-completion modes. The EHC also necessitates participant training and may increase respondent burden.
The usability testing of the ELSA EHC included cognitive interviewing techniques to evaluate the EHC's effectiveness as a recall instrument within the ELSA online survey context. The testing employed both a Question and Answer Model and the Human Action Cycle framework to assess participants’ cognitive processes and interface interactions. Findings from this testing informed developmental changes to the EHC. The implications of these findings for survey implementation and respondent burden will be discussed, highlighting the balance between enhanced data quality and respondent experience.
Cracking the Code: Thematic Analysis for Question Design
Dr Charlotte Hales (Office for National Statistics) - Presenting Author
When designing data collection questions, such as for surveys, it is optimal to employ a respondent-centred design framework alongside an Agile methodology. Despite the benefits of these approaches, there remains a gap in guidance on interpreting qualitative data during iterative research rounds when developing new question designs in the alpha phase. This talk aims to bridge that gap by offering practical tips and tricks to enhance your analysis, ensuring the creation of quality questions. Structured around the distinct stages of thematic analysis, the session will equip researchers with the tools needed to design effective data collection questions.
Coding the data accurately is crucial in the initial phase of question design iteration work. Establishing a robust coding frame ensures that all relevant respondent contexts are captured accurately. By meticulously categorising responses, researchers can reflect on the nuances of the respondent's context.
Extracting themes from the coding frame involves evaluating the ‘weight’ of quotes and identifying potential misleading data. Researchers must discern the most significant data points while filtering out anomalies. This ensures the themes are comprehensive and accurately represent the underlying patterns in the data.
Reviewing and refining themes is essential for the final findings pack to help tell the story of our question testing. Buy-in for change can be challenging, so it is critical to help stakeholders navigate these findings to ensure adoption of the final questions.
Quality assurance throughout this process is critical to avoid researcher bias. Implementing rigorous checks at each stage ensures the integrity of the data and the reliability of the findings.
By maintaining high standards of thematic analysis, researchers can deliver insights that are both accurate and actionable, ultimately enhancing the effectiveness of their qualitative research.
A Recipe to Handle Receipts? Usability Testing the Receipt Scanning Function of an App-based Household Budget Diary
Mr Johannes Volk (Assistant Head of Section) - Presenting Author
Mr Lasse Häufglöckner (research associate)
As part of the EU project Smart Survey Implementation (SSI), the Federal Statistical Office of Germany (Destatis) is participating in the development of Smart Surveys. Smart Surveys combine traditional question-based surveys with smart features that collect data by accessing device sensors. One such feature is a receipt scanner that uses the smartphone camera, allowing participants to upload pictures of shopping receipts in a survey app. The aim is to reduce response burden in diary-based household budget surveys.
To achieve this goal, respondents must be able to use smart features with ease. Destatis therefore conducted qualitative usability tests with 19 participants: their interaction with the app was observed, followed by an interview on their user experience.
Given the choice between manual input and using the scanner, respondents prefer the scan function to record purchases. Participants appreciate the fast and easy way to record receipts, compared to the manual input of purchases.
All participants were able to use the scan function, although the user-friendliness of the current state of development proved insufficient. Respondents do not accept having to correct data: the effort involved is perceived as too high, and results are expected to be highly accurate.
The study shows that a receipt scanning function is, in itself, highly appreciated. However, for respondents to actually use it, the function must work reliably and quickly, be easy to operate, and require little effort. Concerning the further development of this smart feature, the results confirm our approach while also showing where improvements are needed.
Lessons from iterative user testing of an online survey questionnaire: recommendations for online layouts for a range of question types
Ms Mimi Aram-Walker (Verian UK) - Presenting Author
Ms Alice McGee (Verian UK)
This paper draws on recent user testing conducted in-house at Verian that aims to devise a standard online questionnaire theme using evidence-based layouts, with a set of accompanying guidelines per question type. It will summarise the testing carried out to date and the range of methods employed to capture user experience, and discuss five key learnings that have emerged so far:
• Use a simple and intuitive design: basic layouts, colour schemes, navigation elements
• Take a pragmatic approach to additional ‘help’ text and consider presentation: how users interact with error, signposting and help text in practice, balancing the level of information offered against its usefulness
• Optimise bespoke question layouts: designing open text formats to encourage high quality verbatim responses and harness specific device features
• Use dynamic grid designs that suit all device types: exploring user experiences of a range of innovative alternatives to the ‘traditional’ grid design and thoughts on the ‘best’ grid layout
• Avoid, or find alternatives for, over-complicated layouts: outcomes from testing more complex formats such as sliders, ranking and scale questions
Building on these key learnings, this paper will conclude with recommendations for best practice in designing online questionnaires.