
ESRA 2023 Glance Program


All time references are in CEST

UX, user-centred, and respondent-centred design: its application and development

Session Organisers: Dr Aditi Das (National Centre for Social Research (NatCen))
Ms Sierra Mesplie-Escott (The National Centre for Social Research)
Ms Sophie Pilley (The National Centre for Social Research)
Time: Tuesday 18 July, 09:00 - 10:30
Room

Survey research commissioners and survey designers are increasingly paying attention to respondents' experiences in completing their surveys, especially as more surveys move to self-completion modes. As such, commissioners and designers are examining their respondents' journeys, measurement instruments, and data user needs to identify how to optimise the user experience by applying design thinking. Some surveys can feature design thinking from inception, allowing many study components to be optimised for the respondents' experience. Other long-standing surveys contend with redesigning components to meet established data-user needs and commissioner requirements.
In this session, we invite practitioners to showcase work undertaken using user/respondent-centred design thinking or UX research methods in the development of a survey. Examples could include, but are not limited to, the following:
- Usability testing of survey questionnaires, respondent communications, interviewer training, or other survey elements
- Visual design of survey elements, including design for responsiveness and accessibility
- Applying UX principles or frameworks to:
  o survey instrument development and testing
  o mapping the respondent's or interviewer's journey through the survey
  o rethinking conventional survey methods practices
We invite practitioners to present findings from developing or re-developing a survey featuring UX methods, user-centred or respondent-centred design thinking.

Keywords: Respondent experience, Design thinking, UX, UI, Respondent-centred design, Usability testing, Visual design, Survey instrument development, Accessibility, Interviewer journey

Papers

Human-Centered Redesign to Improve Survey Quality and Reduce Respondent Burden

Mr Brad Edwards (Westat) - Presenting Author
Ms Danielle Mayclin (Westat)
Mr Jesus Arrue (Westat)
Ms Angie Kistler (Westat)
Ms Lena Centeno (Westat)
Ms Casey Fernandes (Westat)

While most surveys are about people, the individuals who contribute their data or collect it are often overlooked. This paper is a case study about an ongoing survey shifting its longstanding researcher-driven design to better address the needs of these people.
The US has a complex, fragmented health care system (18% of GDP). Since 1996 the Medical Expenditure Panel Survey (MEPS) has produced annual data on the system. About 10,000 households are sampled each year for five 90-minute face-to-face interviews over 30 months. The exceedingly high respondent burden includes keeping records of all health care events and completing several self-administered questionnaires (SAQs). The survey began converting the paper SAQs to a web + paper multimode design in 2022. This presentation is about initial experiences using a human-centered, UX approach to move some elements from the face-to-face interview to the web.
Human-centered design (HCD) shares many elements with respondent-centered design. HCD is a problem-solving approach with four main principles: Be people-centered; find the right problem; think of everything as a system; and intervene in small and simple ways. The HCD process includes rapid prototyping, user feedback, and frequent iteration. (We also call it human-centered rather than respondent-centered because we don’t envision eliminating the MEPS interviewer entirely and must keep both actors in mind.) HCD is especially well-suited to redesigns of complex systems.
We sketch the traditional respondent-interviewer journey in collecting medical event data. We highlight a key finding from a focus group with MEPS respondents, then describe a design process to capture key data elements about each event in a web questionnaire. We tested the resulting two-screen design with new respondents in two iterations. We present key findings and discuss next steps.


Incorporating Data Rotation into the UK's Transformed Labour Force Survey using Respondent Centred Design

Dr Adam Kelly (Office for National Statistics) - Presenting Author
Miss Lauren Carter (Office for National Statistics)

The UK Office for National Statistics (ONS) has been working to transform its Labour Force Survey. With an online-first approach, the Transformed Labour Force Survey (TLFS) is achieving improved response rates. However, rates of partial completion remain an area for potential improvement. Enhancing the respondent experience across the entire survey journey is essential in tackling this issue.
The Research and Design (R&D) team at the ONS uses Respondent Centred Design methodology to develop questionnaire content. Employing an Agile delivery approach, the team applies a variety of research methods to explore and understand the respondent experience. This presentation demonstrates how this approach was used to incorporate data rotation into the online version of the TLFS.
Data rotation refers to the ‘rolling forward’ of data from one survey wave to the next. Check questions can identify where no changes have occurred, allowing the respondent to skip sections of the survey, reducing survey length and respondent burden. The first step to enabling data rotation in the TLFS was to develop a method of establishing who is living in the sampled household. This is essential for linking individuals across the 5 waves of the survey. Following an evidence-based approach, the R&D team undertook a range of Discovery activities including a review of quantitative data, interviewer focus groups, and public acceptability testing. This allowed the team to gain insights into respondents' mental models which were used to inform initial designs. These designs were qualitatively tested during an Alpha phase.
This presentation will discuss the benefits of data rotation as well as the challenges of implementing it. The presenters will walk through the design process from the early exploratory work right through to the findings from their qualitative testing of designs with respondents. At the end, presenters will share the finalised question designs.


ELSA Event History Calendar User Testing

Miss Sierra Mesplie-Escott (The National Centre for Social Research)
Mr Jerome Swan (The National Centre for Social Research)
Dr Aditi Das (The National Centre for Social Research) - Presenting Author
Mr Richard Bull (The National Centre for Social Research)
Dr Maria Tsantani (The National Centre for Social Research)
Dr Kirsty Bennett (West Sussex County Council)
Dr Darina Peycheva (UCL)
Ms Kate Taylor (The National Centre for Social Research)

As many surveys transition to online self-completion, they increasingly require extensive and complex measures, which can prove burdensome for respondents. For this reason, prioritising the respondent experience in new modes is more crucial now than ever. This paper presents findings from in-person moderated usability testing carried out as part of the English Longitudinal Study of Ageing (ELSA). In 2007, ELSA initiated a special sub-study aimed at collecting comprehensive life histories with interviewer assistance. The current iteration transitions to an online format, incorporating many measures from the 2007 survey alongside a newly introduced Event History Calendar (EHC) designed to enhance recall. Unlike the current iteration, the 2007 EHC was designed for interviewers to use to aid respondents' recall. Research indicates that the EHC can improve memory recall, temporal accuracy, and participant engagement during face-to-face data collection; however, it also necessitates participant training and may increase respondent burden, and little is understood about the EHC's effectiveness within self-completion modes.

The usability testing of the ELSA EHC included cognitive interviewing techniques to evaluate the EHC's effectiveness as a recall instrument within the ELSA online survey context. The testing employed both a Question and Answer Model and the Human Action Cycle framework to assess participants’ cognitive processes and interface interactions. Findings from this testing informed developmental changes to the EHC. The implications of these findings for survey implementation and respondent burden will be discussed, highlighting the balance between enhanced data quality and respondent experience.


Cracking the Code: Thematic Analysis for Question Design

Dr Charlotte Hales (Office for National Statistics) - Presenting Author

When designing data collection questions, such as for surveys, it is optimal to employ a respondent-centered design framework alongside an Agile methodology. Despite the benefits of these approaches, there remains a gap in guidance on interpreting qualitative data during iterative research rounds when developing new question designs in the alpha phase. This talk aims to bridge that gap by offering practical tips and tricks to enhance your analysis, ensuring the creation of quality questions. Structured around distinct stages of thematic analysis, the session will empower researchers with the tools needed to design effective data collection questions.

Coding the data accurately is crucial in the initial phase of question design iteration work. Establishing a robust coding frame ensures that all relevant respondent contexts are captured accurately. By meticulously categorising responses, researchers can reflect on the nuances of the respondent's context.

Extracting themes from the coding frame involves evaluating the ‘weight’ of quotes and identifying potential misleading data. Researchers must discern the most significant data points while filtering out anomalies. This ensures the themes are comprehensive and accurately represent the underlying patterns in the data.

Reviewing and refining themes is essential for the final findings pack to help tell the story of our question testing. Buy-in for change can be challenging, so it is critical to help stakeholders navigate these findings to ensure adoption of the final questions.

Quality assurance throughout this process is critical to avoid researcher bias. Implementing rigorous checks at each stage ensures the integrity of the data and the reliability of the findings.

By maintaining high standards of thematic analysis, researchers can deliver insights that are both accurate and actionable, ultimately enhancing the effectiveness of their qualitative research.


A Recipe to Handle Receipts? Usability Testing the Receipt Scanning Function of an App-based Household Budget Diary

Mr Johannes Volk (Assistant head of section) - Presenting Author
Mr Lasse Häufglöckner (research associate)

As part of the EU project Smart Survey Implementation (SSI), the Federal Statistical Office of Germany (Destatis) is participating in the development of Smart Surveys. Smart Surveys combine traditional question-based surveys with smart features that collect data by accessing device sensors. One smart feature is a receipt scanner, which uses the smartphone camera to let participants upload pictures of shopping receipts in a survey app. The aim is to reduce response burden in diary-based household budget surveys.
To achieve this goal, respondents must be able to use smart features very easily. Destatis therefore conducted qualitative usability tests with 19 participants: their interaction with the app was observed, followed by an interview about their user experience.
Given the choice between manual input and the scanner, respondents prefer the scan function for recording purchases. Participants appreciate how fast and easy it is to record receipts compared with entering purchases manually.
All participants were able to use the scan function, although the user-friendliness of the current state of development proved insufficient. Respondents do not accept having to correct data: the effort involved is perceived as too high, and results are expected to be very accurate.
The study shows that a receipt scanning function is, in itself, highly appreciated. However, for respondents to use it, the function must work reliably and quickly, be easy to understand, and require little effort. Concerning the further development of this smart feature, the results confirm our approach while also showing where improvements are needed.


Transforming the Crime Survey for England & Wales (CSEW): Balancing respondent needs with complex data requirements

Miss Bethan Jones (Office for National Statistics) - Presenting Author

The Qualitative and Data Collection Methodology (QDCM) team at the Office for National Statistics (ONS) were tasked with redesigning the screener and victimisation modules of the Crime Survey for England and Wales (CSEW). These modules measure the incidence and prevalence of crime, which comprise the CSEW's primary outputs, and collect other details on the nature and costs of crime. In 2022, the survey moved to a longitudinal design, where Wave 1 is collected face to face and Wave 2 by telephone. The questions have been largely unchanged for 40 years and can be difficult for both interviewers and respondents to complete. Additionally, following a public consultation, a desire to explore the feasibility of moving to an online, self-completion mode was identified. Our programme of work used a Respondent Centred Design Framework (RCDF) to redesign these screener modules, address issues with the existing CSEW questions identified in our previous work (Discovery part 1), and adapt them appropriately for an online, self-completion mode.

We conducted mental models research as part of this redesign. Mental models research is a qualitative approach and an important part of following an RCDF. Collecting and understanding respondent mental models allows us to unpick respondents' thought processes, understanding, and expectations in order to reduce burden and increase usability and participation. We aimed to understand how people conceptualised crime and recalled their experiences, including the terminology they used organically, to inform our question redesign.

Our Discovery part 2 work included:

- mental models research into how people articulated and recalled their experiences of crime
- reviewing data requirements
- examining past CSEW data
- creating user journeys

This presentation will outline our Discovery part 2 work, what we did next and how these methods informed our understanding to substantially redesign the screener module.


Challenging assumption-based design: red and green flag design statements

Miss Alice Toms (Office for National Statistics) - Presenting Author
Miss Laura Wilson (Office for National Statistics)

It can be easy to refer to opinions or make assumptions about how we should design our surveys. This is particularly easy to do when we are under increasing pressure to complete work quickly, or with limited resources. However, designing questionnaires, letters and respondent journeys based on opinions and assumptions can risk us:

- designing the wrong thing, content, or fix/solution
- creating surveys that are not well understood and cannot be used, resulting in increased respondent burden
- collecting data that are inaccurate, unreliable, or of poor quality

It is important that our design choices are user centred and user led, and by ‘user’ we mean the respondent. This means that designs are based on research and evidence, not our personal views. This helps us ensure the survey we are designing meets the needs of our respondents, thereby increasing the chances of meeting our survey's goals.

There are some phrases you can listen out for when designing surveys that signal evidence is not being used to inform design decisions. We have called these ‘red flag statements’ and have created a series of posters to help you identify them in your work. Each poster gives an example of red flag statements that are sometimes heard when designing surveys, e.g. “I think that would confuse respondents”. The red flag statements are informed by opinions and assumptions, which means they should be challenged. Next to these red flag statements are examples of green flag statements, which illustrate what you should expect to hear when decisions are based on evidence rather than opinion.

In our talk we step through each of the posters and share the red and green flag statements. These have been published as UK Government Data Quality Hub survey design best practice guidance.