
ESRA 2023 Glance Program


All time references are in CEST

Promises and Problems of AI Chatbots for Survey Research

Session Organisers Mrs Anna Lena Fehlhaber (Leibniz University Hannover)
Dr Ivar Krumpal (University of Leipzig)
Dr Anatol-Fiete Näher (Hasso-Plattner-Institut Potsdam)
Time: Tuesday 18 July, 09:00 - 10:30
Room

The advent of artificial intelligence (AI) and natural language processing (NLP) technologies is revolutionizing data collection methodologies, particularly in the realm of survey interviews.
Traditional surveys rely on standardized questionnaires with logic-based branching, where specific responses trigger subsequent questions. This structured approach ensures consistency and comparability of data but may lack flexibility and adaptability.
Human interviewers offer more adaptability but introduce their own challenges, such as personal biases and inconsistencies in question delivery. Moreover, participants might alter their responses due to perceived social desirability when interacting with human interviewers, affecting data reliability and validity.
AI chatbots present a promising alternative by combining the standardization of traditional surveys with the adaptability of human interviewers. They offer enhanced anonymity, potentially leading to more candid responses and reducing social desirability bias. The automated nature of AI chatbots ensures consistency in questioning, further mitigating interviewer-related biases. Advanced NLP algorithms enable these chatbots to adapt dynamically to the flow of conversation, providing a more natural and engaging interview experience.
In this session, we invite presentations that explore the innovative use of AI chatbots in survey data collection. We welcome contributions on the following topics:
1. Technical Aspects: Training and fine-tuning AI chatbots for autonomous interview settings.
2. Ethical Considerations: Addressing ethical issues and data privacy in deploying AI for data collection.
3. Comparative Studies: Empirical findings comparing AI-conducted interviews with traditional methods.
4. Practical Applications: Integrating AI chatbots within existing research frameworks.
As we navigate the intersection of AI and survey research, it is crucial to understand the implications of these technological advancements. This session aims to provide a platform for sharing insights and fostering discussions on the future of AI-driven data collection.

Keywords: AI, chatbots, LLM, survey research

Papers

The Chatbot who Interviewed me: Using Conversational Agents in Survey Research

Dr Benjamin Phillips (Social Research Centre) - Presenting Author
Mr Grant Lester (Social Research Centre)
Dr Dina Neiger (Social Research Centre)
Mr Kipling Zubevich (Social Research Centre)

Conversational agents (CAs)—NLP-based dialogue systems employing generative AI—are increasingly used in health interventions (Ding et al. 2024) and education (Ortega-Ochoa et al. 2024). However, uptake in surveys remains limited, despite prior research on ACASI technology for sensitive questioning (Couper et al. 2003; Turner et al. 1995), T-ACASI/IVR (Couper et al. 2004), and IVR with voice recognition (Schober et al. 2015), along with recent voice-based open-ended collection (e.g., Höhne et al. 2024).
In this presentation, we investigate the potential of CAs in surveys, using an Australian probability-based online panel. Panelists were first surveyed via CAWI about their attitudes towards and experiences with generative AI technology in their day-to-day lives, and their willingness to interact with an AI-driven agent as part of their survey experience. A subset of panelists willing to interact with a CA were then randomly allocated to answer a short survey either via self-complete online or via CA, implemented as a T-ACASI callback. The survey included questions for which high-quality benchmarks were available, as well as items subject to social desirability bias. Panelists completing the questions via an AI agent were also asked about their experience of the survey and their willingness to participate in future CA surveys.
We examine predictors of willingness to be surveyed by a CA; completion rates and predictors of survey completion; comparative error across the CAWI and CA arms; responses to questions subject to social desirability bias; subjective perceptions of the interview experience; and drivers of willingness to participate in future CA surveys. We will discuss practical considerations for implementing CAs in the survey context.