🤖 AI Summary
This study addresses a critical gap in existing research, which has predominantly focused on the efficacy of AI-provided emotional support while overlooking how such support is co-constructed through interaction. Reconceptualizing AI emotional support as a sociotechnical process shaped by community negotiation, this work analyzes qualitative data from user–AI companion dialogues and associated community discussions on Reddit. The analysis reveals three core interactional mechanisms—empathic validation, reflective questioning, and a sense of companionship—as well as three key tensions that emerge in practice. By foregrounding the role of social context and interactional dynamics in shaping AI-mediated support, the study offers both theoretical grounding and practical insights for designing responsible, context-sensitive affective support systems.
📝 Abstract
AI companion chatbots are increasingly used for emotional support, and prior work has predominantly documented their mixed psychosocial impacts, including both increased emotional expression and heightened loneliness. However, most existing research focuses on outcome-level effects, offering limited insight into how emotional support is produced through interaction. In this paper, we examine emotional support as an interactional and socially situated process. Drawing on qualitative analysis of Reddit discussions, we analyze how users engage with AI companions and how these interactions are interpreted and contested within online communities. We show that emotional support is co-constructed through conversational mechanisms such as validation, reflective prompting, and companionship, while also giving rise to tensions including support versus dependency, validation versus delusion, and accessibility versus harm. Importantly, support extends beyond human–AI interaction and is shaped by community responses that legitimize or challenge AI-mediated care. Hence, we reconceptualize AI emotional support as a negotiated sociotechnical process and derive implications for the design of responsible, context-sensitive AI systems.