
ESRA 2023 Glance Program


All time references are in CEST

Surveys and HCI and UX Research: Applications, challenges, and opportunities

Session Organisers: Dr Yongwei Yang (Google)
Mr Aaron Sedley (Whale Acres Consulting)
Dr Mario Callegaro (Callegaro Research Methods Consulting)
Time: Tuesday 18 July, 09:00 - 10:30
Room

Over the last decade, surveys have become a common method in human-computer interaction and user experience research (HCI-UXR). They are used to understand usability, evaluate design opportunities, measure user perceptions, or capture broader contexts such as brand equity and demographic, psychographic, or technographic segments. HCI-UXR surveys utilize a range of sampling and data collection designs, from probability-based samples and mixed-mode data collection, to opt-in panels with online data collection, to the unique method of contextual or “in-product” surveys triggered by specific user-product interactions. Additionally, this interdisciplinary field produces new research to improve survey practices, injecting HCI/UX perspectives and methods into design considerations for survey questions, answer scales, and visual display. Despite these developments, survey-focused academic and professional events (e.g., ESRA, AAPOR) rarely have dedicated discussions about the intersecting practices of surveys and HCI-UXR. In this first-of-its-kind session, we will bring together researchers and practitioners to discuss applications, opportunities, and challenges in two areas:
(1) Applying survey best practices to HCI-UXR
(2) Applying HCI-UXR to improve survey design

Both empirical and didactic research may be considered. We provide a non-exhaustive list of examples below:
(1) Advancing HCI-UXR constructs and theories through survey research
(2) Validating survey questions that measure user experience constructs (e.g., ease of use, helpfulness, satisfaction, trust) and their facets
(3) Combining or comparing the value of sentiment and behavioral data in HCI-UXR contexts
(4) Addressing HCI-UXR topics through mixed-method approaches that use surveys
(5) Evaluating sampling and triggering strategies for contextual surveys
(6) Developing scalable solutions for utilizing free-text user feedback
(7) Delineating privacy, safety, and ethical considerations with HCI-UXR surveys
(8) Surveying the career landscape and opportunities for survey professionals in HCI-UXR

Keywords: survey applications, human-computer interaction, user experience research, interdisciplinary research methods, survey design, construct development and validation

Papers

How surveys became a very popular data collection method in HCI: strengths and drawbacks

Dr Mario Callegaro (Callegaro Research Limited) - Presenting Author
Mr Aaron Sedley (Whale Acres Consulting)

In Human Computer Interaction (HCI), surveys are used primarily by User Experience Researchers (UXRs) at different stages of product development.
In recent years, the use of surveys as a research method has increased substantially among UXRs, as well as among UX designers and product managers.

In this presentation we trace the increasing popularity of surveys over the past 20 years, identifying potential reasons and trends.

We believe there are many causes for this increased popularity, namely:
Effectiveness of large-scale in-product surveys in getting immediate feedback on a product, a website, or a tool, without requiring a list of email addresses to contact the users/customers (e.g. Müller & Sedley, 2014)
Increased availability and ease of use of online survey platforms, now mostly Do It Yourself (DIY)
Increased availability of mailing lists of users/customers
Increased availability and ease of use of online panels
Increased awareness of how surveys can quantify attitudes and experiences in ways that cannot be achieved by other methods
Increased awareness that Big Data cannot answer the why, and can only go so far in measuring product usage
Increased “obsession”, especially in the tech sector, with quantifying and measuring the impact of research teams’ projects

While some of the above causes are considered strengths of survey methodology for answering product research questions, we also observe many drawbacks that we will discuss in the presentation, among them:

Over-relying on questionnaires for complex research questions that should be answered with in-depth qualitative methods
Asking survey questions at a level of precision that respondents cannot reasonably provide
Over-surveying users and customers just because it is “easy” and fast

We will end the presentation with a look at the future, discussing educational opportunities for UXRs to become well versed in survey methodology.


Why Not Both? Leveraging the Strengths of Qualitative and Quantitative Methods to Understand Users’ Preferences and Sentiments at Scale

Mr Shao Wei Chia (Google) - Presenting Author
Ms Shengjie Zhang (Google)

Concept testing is typically conducted qualitatively (via one-to-one user interviews or focus groups), as this allows us to garner deep insights about users’ preferences and sentiments. However, as with most qualitative research methods, the insights derived, while rich, may have limited generalizability. On the other hand, quantitative surveys allow us to gather insights from a larger group of users in a shorter period of time, but insights from large-scale surveys typically lack depth compared to insights derived from qualitative research. Therefore, in the present study, we apply a mixed-method approach to gather users’ feedback on several concepts we designed based on a cross-functional design sprint. First, we conducted a series of one-to-one user interviews to solicit qualitative feedback from users. Then, based on the interviews, we developed a survey to validate the findings we observed from the user interviews.

In the survey design, we presented respondents with the same three concepts that were shown to participants during the user interviews. Since it was an online survey, we were not able to verbally talk through each concept with users. Thus, we provided a brief overview of each concept, including screenshots of key features, to ensure all respondents would have a similar understanding of each concept’s functionality. We then asked a series of questions about each concept, including the level of comfort in engaging with the concept and the reasons for (dis)comfort. The order of the concepts was randomized.
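For illustration, concept-order randomization of this kind is usually handled by the survey platform itself; the minimal Python sketch below only conveys the idea, with hypothetical concept labels and seeding scheme rather than the study's actual setup.

```python
# Minimal sketch: randomize the presentation order of concepts per respondent.
# Concept labels and the seeding scheme are hypothetical placeholders.
import random

CONCEPTS = ["Concept A", "Concept B", "Concept C"]

def concept_order(respondent_id: int, seed: str = "esra2023") -> list[str]:
    """Return a reproducible, respondent-specific ordering of the concepts."""
    rng = random.Random(f"{seed}-{respondent_id}")  # string seeds are supported
    order = CONCEPTS.copy()
    rng.shuffle(order)
    return order

print(concept_order(respondent_id=42))
```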

The present study exemplifies a way to combine qualitative and quantitative methods that ensures we obtain rich insights from users while also optimizing for scalability, which in turn reduces the effort needed to conduct robust concept testing.


Developing a Valid and Reliable Team Functioning Scale to Enhance Understanding of Product Development Effectiveness

Ms Marie Huber (Google) - Presenting Author
Ms Qiao Ma (Google)
Ms Claire Taylor (Google)

This paper details the rigorous development and validation of a novel team functioning scale tailored for product development teams at Google. Recognizing the critical role of teamwork in driving success within fast-paced, complex environments, this research addresses the challenge of accurately measuring team functioning – a multifaceted construct impacting productivity and product velocity.

A multi-phase process grounded in established psychometric principles guided the scale's development. Beginning with a comprehensive literature review to define the domain and generate potential items, the process incorporated subject matter expert reviews and cognitive interviews with product team members to ensure content validity and clarity. Subsequent analysis of data from large internal samples, using principal component and exploratory factor analysis, revealed a four-factor structure encompassing: (1) team processes and visibility, (2) team culture, (3) strategic alignment, and (4) balanced team workload.
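For illustration, an exploratory factor analysis of this kind can be sketched in Python with the factor_analyzer package; the data file, item columns, and rotation choice below are assumptions for the sketch, not the authors' actual data or code.

```python
# Minimal sketch of an exploratory factor analysis (EFA) on survey items,
# assuming a pandas DataFrame with one numeric column per Likert-type item.
# File name, columns, and settings are illustrative assumptions only.
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_kmo

items = pd.read_csv("team_functioning_items.csv")  # hypothetical item data

# Check sampling adequacy before factoring (KMO above ~0.6 is usually acceptable).
_, kmo_overall = calculate_kmo(items)
print(f"KMO = {kmo_overall:.2f}")

# Fit a four-factor solution with an oblique (oblimin) rotation, mirroring the
# four-factor structure described in the abstract.
efa = FactorAnalyzer(n_factors=4, rotation="oblimin")
efa.fit(items)

loadings = pd.DataFrame(efa.loadings_, index=items.columns)
print(loadings.round(2))
```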

This paper presents 20-, 12-, 8-, and 4-item versions of the scale, highlighting the rationale and suitability of each for different research or practical applications. It details extensive validation efforts, including assessments of dimensionality, reliability (using Cronbach’s alpha and McDonald’s omega), and convergent validity.
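As a reference point, Cronbach’s alpha can be computed directly from the item variances and the variance of the summed scale; a minimal numpy sketch with fabricated demo responses (not the study’s items or results) is shown below.

```python
# Minimal sketch: Cronbach's alpha for a set of scale items.
# `items` is an (n_respondents x k_items) array-like of numeric responses.
import numpy as np

def cronbach_alpha(items) -> float:
    X = np.asarray(items, dtype=float)
    k = X.shape[1]
    item_vars = X.var(axis=0, ddof=1)       # variance of each item
    total_var = X.sum(axis=1).var(ddof=1)   # variance of the summed scale score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Fabricated demo data: three items answered by five respondents.
demo = np.array([[4, 5, 4], [3, 3, 4], [5, 5, 5], [2, 3, 2], [4, 4, 5]])
print(round(cronbach_alpha(demo), 2))
```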

By providing a psychometrically sound instrument for measuring team functioning, this research enables a deeper understanding of team dynamics and facilitates data-driven interventions to enhance teamwork within product development. The scale offers valuable insights for both researchers and practitioners seeking to optimize team performance and drive organizational change.


Quality In, Quality Out: Investigating the Impact of Survey Response Quality on Data-Driven Persona Development

Dr Bernard J. Jansen (Qatar Computing Research Institute)
Dr Joni Salminen (University of Vaasa) - Presenting Author
Dr João M. Santos (Instituto Universitário de Lisboa (ISCTE-IUL))
Ms Marwan M. Akari (Qatar Foundation)
Ms Soon-gyo Jung (Qatar Computing Research Institute)
Dr Kholoud Khalil Aldous (Qatar Computing Research Institute)
Dr Ali Farooq (University of Strathclyde)
Mrs Safa R. Amin (Carnegie Mellon University in Qatar)
Ms Danial Amin (University of Vaasa)

This paper examines the critical impact of survey data quality on persona generation, addressing fundamental challenges in collecting and validating user data for effective user representation. Although surveys are a primary data collection method for persona development, the effect of the quality and reliability of survey responses on the validity of resulting personas has not been systematically investigated in prior work. Our research is, as far as we know, the first of its kind to specifically address the effect of data quality on data-driven persona development.

We first discuss how careless responses, systematic biases, and fraudulent submissions can compromise persona accuracy and effectiveness. Synthesizing literature from survey methodology, data quality assessment, and persona development, we establish a framework for evaluating and improving survey data used in persona creation. The framework identifies key factors affecting survey data quality, including response patterns, attention checks, and participant engagement, as well as specific metrics for assessing survey data quality in the context of persona development.
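For illustration, two of the simple quality indicators mentioned above, straight-lining across a grid of items and a failed attention check, can be flagged with a few lines of pandas; the column names and expected answer below are hypothetical, not the authors' data.

```python
# Minimal sketch of two simple response-quality flags: straight-lining
# (identical answers across a Likert grid) and a failed attention check.
# Column names and the expected attention-check answer are hypothetical.
import pandas as pd

LIKERT_ITEMS = [f"q{i}" for i in range(1, 11)]   # hypothetical item columns
ATTENTION_ITEM = "attention_check"               # e.g. "Please select 'Agree'"
ATTENTION_EXPECTED = 4

def flag_low_quality(df: pd.DataFrame) -> pd.DataFrame:
    out = df.copy()
    # Straight-lining: no variation across the grid items.
    out["flag_straightline"] = out[LIKERT_ITEMS].nunique(axis=1).eq(1)
    # Attention check: respondent did not pick the instructed option.
    out["flag_attention"] = out[ATTENTION_ITEM].ne(ATTENTION_EXPECTED)
    out["flag_low_quality"] = out["flag_straightline"] | out["flag_attention"]
    return out
```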

Our empirical findings suggest that traditional survey validation measures may need adaptation for persona-specific applications, particularly when dealing with behavioral and demographic data. We conclude with practical recommendations for improving survey data quality in persona generation, including enhanced questionnaire design, strategic placement of attention checks, and effective data cleaning protocols. We also discuss methodological approaches for detecting and mitigating low-quality survey responses, including advanced statistical techniques and automated validation methods.
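As one illustrative example of such a statistical technique (our assumption for the sketch, not necessarily the authors' method), multivariate outliers among numeric survey responses can be screened with Mahalanobis distances against a chi-square cut-off.

```python
# Minimal sketch: screen multivariate outliers among numeric survey responses
# using squared Mahalanobis distances and a chi-square cut-off.
import numpy as np
from scipy.stats import chi2

def mahalanobis_outliers(X, alpha: float = 0.001) -> np.ndarray:
    """Return a boolean mask of rows flagged as multivariate outliers."""
    X = np.asarray(X, dtype=float)
    mu = X.mean(axis=0)
    cov_inv = np.linalg.pinv(np.cov(X, rowvar=False))
    diff = X - mu
    d2 = np.einsum("ij,jk,ik->i", diff, cov_inv, diff)  # squared distances
    cutoff = chi2.ppf(1 - alpha, df=X.shape[1])
    return d2 > cutoff
```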

The findings contribute to the understanding of how survey data quality affects persona development and provide actionable insights for researchers and practitioners in user experience design and market research. The research also identifies areas requiring further investigation, particularly regarding the balance between data quality measures and participant engagement in survey-based persona development.