Informing Robot Wellbeing Coach Design through Longitudinal Analysis of Human-AI Dialogue

📅 2026-02-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses a critical gap in understanding the long-term interaction dynamics between humans and AI-powered well-being coaches in real-world settings, which has hindered the effective design of such systems. Drawing on a longitudinal dataset of 4,352 dialogue messages exchanged between 38 university students and a large language model–driven well-being coach, the research presents the first large-scale qualitative content analysis of authentic, extended human–AI conversations. It identifies three core interaction patterns: users proactively steering conversation topics, actively seeking guidance, and expressing emotions within supportive dialogues. These findings offer empirical grounding and actionable design principles for developing AI well-being coaches that respect user autonomy, provide appropriately calibrated scaffolding, and adhere to ethical boundaries.

📝 Abstract
Social robots and conversational agents are being explored as supports for wellbeing, goal-setting, and everyday self-regulation. While prior work highlights their potential to motivate and guide users, much of the evidence relies on self-reported outcomes or short, researcher-mediated encounters. As a result, we know little about the interaction dynamics that unfold when people use such systems in real-world contexts, and how these dynamics should shape future robot wellbeing coaches. This paper addresses this gap through content analysis of 4,352 messages exchanged longitudinally between 38 university students and an LLM-based wellbeing coach. Our results provide a fine-grained view into how users naturally shape, steer, and sometimes struggle within supportive human-AI dialogue, revealing patterns of user-led direction, guidance-seeking, and emotional expression. We discuss how these dynamics can inform the design of robot wellbeing coaches that support user autonomy, provide appropriate scaffolding, and uphold ethical boundaries in sustained wellbeing interactions.
Problem

Research questions and friction points this paper is trying to address.

human-AI dialogue
robot wellbeing coach
longitudinal interaction
interaction dynamics
wellbeing support
Innovation

Methods, ideas, or system contributions that make the work stand out.

longitudinal analysis
human-AI dialogue
wellbeing coach
LLM-based agent
user autonomy
Keya Shah
SMART Lab, New York University, Abu Dhabi, United Arab Emirates
Himanshi Lalwani
SMART Lab, New York University, Abu Dhabi, United Arab Emirates
Zein Mukhanov
SMART Lab, New York University, Abu Dhabi, United Arab Emirates
Hanan Salam
SMART lab @NYU Abu Dhabi / Co-founder of Women in AI
Artificial Intelligence · Human-Machine Interaction · Human-Robot Interaction