AI Conversational Interviewing: Transforming Surveys with LLMs as Adaptive Interviewers

📅 2024-09-16
🏛️ arXiv.org
📈 Citations: 1 · Influential: 0
🤖 AI Summary
Traditional survey methods face a trade-off between depth and scalability: structured questionnaires scale well but lack expressive flexibility, whereas in-depth interviews yield rich insights yet are labor-intensive and difficult to scale. Method: This study conducts the first controlled experimental evaluation of large language models (LLMs) as adaptive, conversational interviewers—specifically for political topics—comparing AI- and human-administered interviews across data quality, participant engagement, and operational efficiency. We propose a design framework that reconciles standardization with conversational adaptability, integrating structured questionnaire logic, real-time response generation, and multi-dimensional evaluation metrics (e.g., protocol adherence, response quality, engagement). Contribution/Results: AI-conducted interviews achieve data quality comparable to human interviews, demonstrate substantially improved scalability, and elicit positive participant feedback. This work establishes a novel paradigm for high-fidelity, large-scale qualitative data collection in the social sciences.
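The framework described above pairs a fixed questionnaire with adaptive, LLM-generated follow-ups. The sketch below illustrates the control loop only: `run_interview`, `respond`, and the word-count heuristic are all hypothetical stand-ins (a real system would prompt an LLM to decide whether and how to probe), not the paper's actual implementation.

```python
def run_interview(questionnaire, respond, max_probes=1):
    """Administer a fixed questionnaire in order, probing thin answers.

    questionnaire: list of question strings (the standardized protocol).
    respond: callback simulating the participant's reply to a prompt.
    max_probes: cap on adaptive follow-ups per question, so the interview
                stays close to the structured protocol.
    """
    transcript = []
    for question in questionnaire:
        answer = respond(question)
        transcript.append((question, answer))
        probes = 0
        # Heuristic stand-in for the LLM's probe decision: short answers
        # trigger one generic follow-up. An LLM would instead generate a
        # context-specific probe from the conversation so far.
        while probes < max_probes and len(answer.split()) < 5:
            probe = "Could you say more about that?"
            answer = respond(probe)
            transcript.append((probe, answer))
            probes += 1
    return transcript
```

Capping probes per question is one simple way to reconcile standardization (every participant sees the same core questions) with conversational adaptability (terse answers still get a follow-up).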

📝 Abstract
Traditional methods for eliciting people's opinions face a trade-off between depth and scale: structured surveys enable large-scale data collection but limit respondents' ability to express unanticipated thoughts in their own words, while conversational interviews provide deeper insights but are resource-intensive. This study explores the potential of replacing human interviewers with large language models (LLMs) to conduct scalable conversational interviews. Our goal is to assess the performance of AI Conversational Interviewing and to identify opportunities for improvement in a controlled environment. We conducted a small-scale, in-depth study with university students who were randomly assigned to be interviewed by either AI or human interviewers, both employing identical questionnaires on political topics. Various quantitative and qualitative measures assessed interviewer adherence to guidelines, response quality, participant engagement, and overall interview efficacy. The findings indicate the viability of AI Conversational Interviewing in producing quality data comparable to traditional methods, with the added benefit of scalability. Based on our experiences, we present specific recommendations for effective implementation.
Problem

Research questions and friction points this paper is trying to address.

How can opinion research escape the trade-off between depth (conversational interviews) and scale (structured questionnaires)?
Can LLMs replace human interviewers for scalable conversational interviews without sacrificing data quality?
How does AI Conversational Interviewing perform in a controlled setting, and where can it be improved?
Innovation

Methods, ideas, or system contributions that make the work stand out.

Controlled experimental comparison of AI- and human-administered interviews using identical political questionnaires
Design framework combining structured questionnaire logic with real-time adaptive follow-ups
Multi-dimensional evaluation covering protocol adherence, response quality, and participant engagement