🤖 AI Summary
This study addresses a critical gap in understanding the long-term interaction dynamics between humans and AI-powered wellbeing coaches in real-world settings, a gap that has hindered the effective design of such systems. Drawing on a longitudinal dataset of 4,352 dialogue messages exchanged between 38 university students and a large language model (LLM)–driven wellbeing coach, the research presents a large-scale qualitative content analysis of authentic, extended human–AI conversations. It identifies three core interaction patterns: users steering conversation topics, seeking guidance, and expressing emotions within supportive dialogues. These findings offer empirical grounding and actionable design principles for AI wellbeing coaches that respect user autonomy, provide appropriately calibrated scaffolding, and adhere to ethical boundaries.
📝 Abstract
Social robots and conversational agents are being explored as supports for wellbeing, goal-setting, and everyday self-regulation. While prior work highlights their potential to motivate and guide users, much of the evidence relies on self-reported outcomes or short, researcher-mediated encounters. As a result, we know little about the interaction dynamics that unfold when people use such systems in real-world contexts, or about how these dynamics should shape future robot wellbeing coaches. We address this gap through content analysis of 4,352 messages exchanged longitudinally between 38 university students and an LLM-based wellbeing coach. Our results provide a fine-grained view of how users naturally shape, steer, and sometimes struggle within supportive human–AI dialogue, revealing patterns of user-led direction, guidance-seeking, and emotional expression. We discuss how these dynamics can inform the design of robot wellbeing coaches that support user autonomy, provide appropriate scaffolding, and uphold ethical boundaries in sustained wellbeing interactions.