🤖 AI Summary
Current AI companion platforms lack psychological safety mechanisms for relationship termination, which often triggers grief responses in users akin to interpersonal loss. This study addresses that gap by analyzing user community data through grounded theory and integrating insights from grief psychology and self-determination theory to develop the first psychological safety design framework specifically for ending AI companion relationships. The research elucidates how user attributions of agency, perceptions of finality, and anthropomorphism shape grief experiences. It proposes four design principles, accompanied by interactive prototypes, showing that user-directed termination processes are associated with greater emotional closure. The findings offer platforms actionable intervention strategies to support healthier transitions from artificial to human connections.
📝 Abstract
Millions of users form emotional attachments to AI companions like Character AI, Replika, and ChatGPT. When these relationships end through model updates, safety interventions, or platform shutdowns, users receive no closure and report grief comparable to human loss. As regulations mandate protections for vulnerable users, discontinuation events will accelerate, yet no platform has implemented deliberate end-of-"life" design. Through grounded theory analysis of AI companion communities, we find that discontinuation is a sense-making process shaped by how users attribute agency, perceive finality, and anthropomorphize their companions. Strong anthropomorphization co-occurs with intense grief; users who perceive change as reversible become trapped in fixing cycles; and user-initiated endings demonstrate greater closure. Synthesizing grief psychology with Self-Determination Theory, we develop four design principles and artifacts demonstrating how platforms might provide closure and orient users toward human connection. We contribute the first framework for designing psychologically safe AI companion discontinuation.