AI-Assisted Conversational Interviewing: Effects on Data Quality and User Experience

📅 2025-04-09
🏛️ arXiv.org
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
Standardized surveys struggle to balance scalability with depth, while traditional qualitative interviews lack procedural consistency. This paper proposes an AI-augmented conversational interviewing framework that integrates real-time interactive probing and automated semantic coding into online questionnaire workflows. Using an off-the-shelf large language model (LLM) without fine-tuning, the system deploys a textbot that dynamically probes open-ended responses for elaboration and codes them on the fly, preserving operational consistency. A randomized controlled trial shows that the method significantly increases the richness and information density of open-ended responses, achieves moderate coding accuracy (with a slight false-positive bias), and incurs only a modest, acceptable reduction in respondent experience. The study empirically demonstrates the feasibility of AI-enhanced qualitative data collection at scale and outlines a hybrid survey methodology.

๐Ÿ“ Abstract
Standardized surveys scale efficiently but sacrifice depth, while conversational interviews improve response quality at the cost of scalability and consistency. This study bridges the gap between these methods by introducing a framework for AI-assisted conversational interviewing. To evaluate this framework, we conducted a web survey experiment where 1,800 participants were randomly assigned to text-based conversational AI agents, or "textbots", that dynamically probe respondents for elaboration and interactively code open-ended responses. We assessed textbot performance in terms of coding accuracy, response quality, and respondent experience. Our findings reveal that textbots perform moderately well in live coding even without survey-specific fine-tuning, despite slightly inflated false positive errors due to respondent acquiescence bias. Open-ended responses were more detailed and informative, but this came at a slight cost to respondent experience. Our findings highlight the feasibility of using AI methods to enhance open-ended data collection in web surveys.
Problem

Research questions and friction points this paper is trying to address.

Bridging standardized surveys and conversational interviews using AI
Evaluating AI chatbots for dynamic probing and live coding
Assessing impact on response quality and user experience
Innovation

Methods, ideas, or system contributions that make the work stand out.

AI chatbots using LLMs for dynamic probing
Interactive coding of open-ended responses
Enhancing survey data quality with AI
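
The workflow the paper describes, an LLM textbot that probes thin open-ended answers and live-codes the result, can be illustrated with a minimal sketch. Everything below is assumed for illustration only: the `call_llm` and `ask_respondent` callables stand in for an off-the-shelf chat LLM API and the survey front end, and the `CODEBOOK`, prompts, and single-probe policy are hypothetical rather than the authors' actual design.

```python
# Hypothetical sketch of one "textbot" open-ended item:
# probe a thin answer once, then code the combined answer against a fixed codebook.
from typing import Callable

CODEBOOK = ["health care", "economy", "immigration", "other"]  # illustrative codes only

def needs_probe(answer: str, call_llm: Callable[[str], str]) -> bool:
    """Ask the LLM whether the answer is detailed enough to code, or should be probed."""
    verdict = call_llm(
        "You are a survey interviewer. Does this open-ended answer give enough "
        f"detail to be coded, yes or no?\nAnswer: {answer!r}"
    )
    return verdict.strip().lower().startswith("no")

def probe_question(answer: str, call_llm: Callable[[str], str]) -> str:
    """Generate a short, neutral follow-up probe tailored to the respondent's answer."""
    return call_llm(
        "Write one short, neutral follow-up question asking the respondent to "
        f"elaborate on: {answer!r}"
    )

def code_response(answer: str, call_llm: Callable[[str], str]) -> list[str]:
    """Live-code the (possibly elaborated) answer against the codebook."""
    raw = call_llm(
        f"Assign zero or more of these codes to the answer: {CODEBOOK}.\n"
        f"Answer: {answer!r}\nReturn a comma-separated list."
    )
    return [c.strip() for c in raw.split(",") if c.strip() in CODEBOOK]

def run_item(initial_answer: str,
             ask_respondent: Callable[[str], str],
             call_llm: Callable[[str], str]) -> tuple[str, list[str]]:
    """One open-ended item: optionally probe once, then return the full text and codes."""
    full_answer = initial_answer
    if needs_probe(initial_answer, call_llm):
        follow_up = probe_question(initial_answer, call_llm)
        full_answer += " " + ask_respondent(follow_up)
    return full_answer, code_response(full_answer, call_llm)
```

Passing the LLM and respondent I/O in as callables keeps the probing and coding logic testable independently of any particular model or survey platform; the paper's actual prompting, probing depth, and coding scheme are not reproduced here.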
🔎 Similar Papers
No similar papers found.
Soubhik Barari
Research Methodologist / Data Scientist, NORC
Computational Social Science, Public Opinion, Survey Methodology, AI/NLP, Election Analytics
Jarret Angbazo
NORC at the University of Chicago
Natalie Wang
NORC at the University of Chicago
Leah M. Christian
NORC at the University of Chicago
Elizabeth Dean
NORC at the University of Chicago
Zoe Slowinski
NORC at the University of Chicago
Brandon Sepulvado
NORC at the University of Chicago