
ESRA 2023 Glance Program


All time references are in CEST

Exploring Agent-based Interviewing in Web Surveys

Session Organisers: Professor Jan Karem Höhne (DZHW, Leibniz University Hannover)
Professor Marco Angrisani (University of Southern California)
Professor Frederick Conrad (University of Michigan)
Professor Arie Kapteyn (University of Southern California)
Professor Florian Keusch (University of Mannheim)
Time: Tuesday 18 July, 09:00 - 10:30

Web surveys continue to replace other survey modes, especially in-person interviews. Even large-scale surveys, such as the European Social Survey and the National Longitudinal Study of Adolescent to Adult Health, now routinely collect data via web surveys. However, the absence of interviewers complicates the provision of assistance to respondents and the creation of trust and motivation, raising concerns about answer quality. The advent of Generative Artificial Intelligence makes it possible to build interviewing agents that are visually realistic and conversationally responsive, deriving the latter ability from Large Language Models. Embedding such agents in web surveys promises to restore some of the quality-enhancing contributions of human interviewers. Intelligent agents can clarify questions and provide feedback beyond what is typical in text-based web surveys, and their mere presence can reduce speeding and non-differentiation, although it may also introduce social desirability bias. Because respondents can choose an agent, this choice may foster rapport that helps to overcome such bias. These innovations not only give web surveys a human touch but also make them more inclusive. Individuals with low literacy and education, or who are not skilled speakers and/or readers of the survey language (e.g., immigrants and refugees), may be more likely to participate if they see (or hear) an agent that looks (or sounds) like them. Similarly, those with sensory challenges, especially the elderly, may favor verbal communication with a realistic-looking, conversational agent over text-based communication. In this session, we invite studies on all kinds of interviewing agents, not just those described here, conducted in various settings (lab or field) and with different study designs (cross-sectional or longitudinal). Contributions on legal and ethical considerations in agent-based interviewing are also welcome, as are studies that are still work in progress.

Keywords: Answer behavior, Data quality, Embodied Agents, Large Language Models, Web surveys
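
For readers unfamiliar with the mechanics of LLM-driven interviewing agents, the sketch below shows one plausible shape of a single agent turn in Python: the agent presents a question, inspects the respondent's answer, and generates a clarification or probe. This is a minimal illustration under stated assumptions, not the architecture of any system in this session; the `call_llm` helper is a hypothetical stand-in for whatever chat-completion API an implementation would use.

```python
# Minimal sketch of one LLM-driven interviewing-agent turn.
# `call_llm` is a hypothetical placeholder, not a real client;
# a production system would forward `messages` to an LLM endpoint.

SYSTEM_PROMPT = (
    "You are a survey interviewer. Ask each question verbatim. If the "
    "respondent's answer is short or unclear, ask one neutral follow-up "
    "probe; otherwise thank them and move on."
)

def call_llm(messages: list[dict]) -> str:
    # Placeholder: return a canned probe so the sketch runs stand-alone.
    return "Could you tell me a bit more about that?"

def interview_turn(question: str, answer: str) -> str:
    """Return the agent's next utterance given the respondent's answer."""
    messages = [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "assistant", "content": question},
        {"role": "user", "content": answer},
    ]
    return call_llm(messages)

if __name__ == "__main__":
    question = "What does a typical workday look like for you?"
    print(interview_turn(question, "It's fine, I guess."))  # short answer -> probe
```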

Papers

Examining the effects of embodied interviewing agents on open narrative responses

Professor Jan Karem Höhne (DZHW, Leibniz University Hannover) - Presenting Author
Dr Cornelia Neuert (GESIS)
Mr Joshua Claassen (DZHW, Leibniz University Hannover)

Open narrative questions in web surveys have great potential to obtain rich and in-depth information from respondents. However, open narrative questions administered through web surveys frequently suffer from short responses or no responses at all. This bears the risk of not obtaining sufficient information to answer the research question(s) under investigation. Advances in Generative Artificial Intelligence (GenAI) make it possible to enhance respondents’ web survey experience by resembling in-person interactions in a self-administered setting. Building on these advances, we investigate web surveys in which open narrative questions are asked through embodied interviewing agents, incorporating features of in-person interviews into web surveys. While the presence of an interviewing agent can encourage more considered and meaningful responses, it can also introduce social desirability. In this study, we therefore address the following research question: How do embodied interviewing agents affect responses to sensitive open narrative questions? For this purpose, we conducted a mobile web survey and randomly assigned respondents either to interviewing agents varying in gender (male or female) or to a text-based web survey interface without an agent. We employed two open narrative questions: one on women’s role in the workplace and one on family relations. The results of the quantitative text analyses indicate no differences with respect to response length. However, open narrative responses given to the interviewing agents cover more topics. There are also no differences in sentiment (or extremity of responses), indicating that social desirability plays a minor role. This study is a first attempt to implement key elements of in-person interviews in self-administered web surveys, and its results indicate some data quality benefits of interviewing agents.
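
As an aside for readers, the sketch below illustrates, in Python, the kind of quantitative text analysis the abstract refers to: comparing response length across conditions with a Mann-Whitney U test. It is an invented example under stated assumptions, not the authors' pipeline, and all responses shown are fabricated for illustration.

```python
# Illustrative comparison of open-response length between an agent
# condition and a text-only condition. All data are invented.
from scipy.stats import mannwhitneyu

agent_responses = [
    "Women should have the same opportunities as men in every workplace.",
    "I think family relations have grown more distant over the years.",
    "Workplaces still undervalue care responsibilities.",
]
text_responses = [
    "Equal pay.",
    "Fine, I guess.",
    "No opinion.",
]

def word_counts(responses: list[str]) -> list[int]:
    """Response length measured as a simple word count per answer."""
    return [len(r.split()) for r in responses]

# Non-parametric test, since response lengths are typically skewed.
u, p = mannwhitneyu(
    word_counts(agent_responses),
    word_counts(text_responses),
    alternative="two-sided",
)
print(f"Mann-Whitney U = {u:.1f}, p = {p:.3f}")
```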


Exploring the Potential of an AI-Powered Interviewing Agent for Individuals Who Are Blind or Severely Visually Impaired

Ms Sabia Akram (University of Surrey)
Dr Jenny Harris (University of Surrey)
Professor Arie Kapteyn (University of Southern California)
Dr Richard Green (University of Surrey)
Dr Freda Mold (University of Surrey)
Dr Marco Angrisani (University of Southern California)
Dr Haomiao Jin (University of Surrey) - Presenting Author

Background: Individuals who are blind or severely visually impaired often face barriers when undertaking online surveys. Although assistive technologies exist, there is a need for tools that provide more personalised and dynamic support. AI-powered interviewing agents offer a potential solution, but their design must be guided by the real-world needs and preferences of target users.
Methods: This ongoing study employs a co-design approach to develop an AI-powered interviewing agent tailored for individuals who are blind or severely visually impaired. Participants (N=20-30) are recruited through online advertisements, support groups, and charitable organisations, as well as via snowball sampling. They share their experiences with digital devices, highlight accessibility challenges, and identify essential features that would make an interviewing agent more useful and engaging.
Results: Preliminary findings suggest that participants value interoperability with familiar assistive software and prefer conversational agents that offer human-like, flexible interaction rather than imposing stringent restrictions such as time or word limits. The feedback indicates that, for a survey interviewing agent to be effective in reaching this hard-to-reach population, it should accommodate a relatively wide range of input styles and support open-ended, free-flowing interactions. Survey navigation too often relies on visual prompts; there is a need for navigation functionality that is intuitive, rather than cumbersome, for blind or visually impaired users.
Discussion: These preliminary results highlight how individuals who are blind or severely visually impaired value flexible, human-like conversational agents. By addressing preferences for interoperability, a wide range of input styles, and more intuitive features, developers can create AI interviewing solutions that meaningfully address barriers faced by this hard-to-reach population.


Advancing Inclusivity and Data Quality in Web Surveys Through Virtual Humans

Ms Sharon Mozgai (University of Southern California) - Presenting Author

Web surveys have become the dominant mode of data collection, yet the absence of interviewers introduces challenges, including lower trust, motivation, and answer quality. At USC’s Institute for Creative Technologies (ICT), the Virtual Human Therapeutics Lab (VHTL) is leveraging virtual human technology to address these limitations. Building on ICT’s earlier work, which demonstrated that virtual humans can replicate the benefits of in-person interactions while offering advantages of their own, this research explores their potential for improving survey methodology.
Past findings highlight that virtual humans foster disclosure on sensitive topics, mitigate social desirability, and create consistent, non-judgmental environments that encourage authentic responses. For example, ICT’s SimSensei project demonstrated that virtual agents encourage individuals to share openly, as respondents feel less judged compared to human interviewers. Research into negotiation tasks found that participants were more honest and less likely to lie when interacting with agents. These insights underline the ability of virtual humans to enhance trust, engagement, and data quality.
Building on these foundations, the VHTL lab is investigating how virtual humans can promote inclusivity in surveys. Evidence shows that agents can provide real-time clarification, reduce non-engagement behaviors such as speeding, and create accessible environments for respondents with literacy or language challenges. Their consistent and unbiased presence has been shown to reduce social desirability, enabling more accurate responses. These findings suggest that virtual humans are well positioned to address the limitations of web surveys, not only replicating the benefits of human interviewers but also surpassing them.
This talk will reflect on findings from ICT’s work, including SimSensei, research on honesty in negotiation tasks, and a comprehensive review of empirical studies in the field. It will also explore the ethical implications of using virtual humans and their potential to redefine the future of inclusive and effective survey methodologies.


Smartification of Surveys: Investigating Measurement Error in Smart Speaker Interviews

Dr Ceyda Deveci (Technical University of Darmstadt) - Presenting Author
Dr Anke Metzler (Technical University of Darmstadt)
Professor Marek Fuchs (Technical University of Darmstadt)

For decades, face-to-face and telephone interviews were the prevailing methods of survey data collection. At the start of the millennium, however, self-administered web surveys began gaining traction because of their efficiency in terms of time and cost (Dillman, 2018). Moreover, the absence of an interviewer in self-administered modes was considered beneficial for handling sensitive questions. On the flip side, the absence of human interaction and social engagement was seen as the primary reason for the sometimes superficial response behavior that may ultimately have detrimental effects on data quality. With the advancement of Internet of Things (IoT) technology, new devices such as smart speakers, equipped with advanced communication capabilities, have emerged and offer new opportunities for survey data collection, promising enhanced social presence and thus potentially improved data quality. In this study, we compare the data quality of interviews administered by a smart speaker with that of traditional survey modes.
As part of a lab experiment conducted in 2024, 90 students took part in a telephone interview, a web survey, and an interview conducted by a smart speaker (within-subject design). To compare data quality, sensitive questions and validated scales were administered across all modes. Preliminary results indicate that item-missing rates were higher and non-differentiation was more prevalent in the smart speaker mode. Additionally, the reliability of scales measuring latent constructs was higher in the web and telephone modes. The focus of this presentation is on factors that may explain the observed variations in data quality between modes, such as social presence, perceived interviewer characteristics, and the experienced flow of the interview.
This study holds considerable importance as it offers preliminary insights into using smart speakers for data collection.
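
As background for readers, the sketch below shows, in Python, one common way to operationalize the data-quality indicators the abstract mentions: item-missing rates, non-differentiation (per-respondent standard deviation across grid items, where 0 indicates straightlining), and scale reliability via Cronbach's alpha. This is an illustrative assumption about typical operationalizations, not the authors' code, and the data are invented.

```python
# Illustrative data-quality indicators for a 4-item grid question.
# Rows are respondents, columns are items; np.nan marks a missing answer.
import numpy as np

grid = np.array([
    [4, 4, 4, 4],        # zero variance -> straightlining
    [1, 5, 2, 4],
    [3, np.nan, 2, 5],   # one item missing
], dtype=float)

# Share of missing answers across all respondent-item cells.
item_missing_rate = np.isnan(grid).mean()

# Non-differentiation: SD across items per respondent (0 = straightlining).
per_respondent_sd = np.nanstd(grid, axis=1)

def cronbach_alpha(items: np.ndarray) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of sum score)."""
    items = items[~np.isnan(items).any(axis=1)]  # listwise deletion
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

print(f"item-missing rate: {item_missing_rate:.2f}")
print(f"per-respondent SD: {per_respondent_sd}")
print(f"Cronbach's alpha:  {cronbach_alpha(grid):.2f}")
```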