Like a Therapist, But Not: Reddit Narratives of AI in Mental Health Contexts

📅 2026-01-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates how users evaluate their interactions with emotionally supportive AI systems in non-clinical settings, focusing on lived experiences and potential risks. Integrating the Technology Acceptance Model and therapeutic alliance theory, the authors propose a theory-grounded annotation framework and apply it, through a hybrid pipeline combining large language models with manual analysis, to a large-scale discourse analysis of 5,126 posts from Reddit mental health communities. Findings indicate that user adoption is driven primarily by narrated outcomes, trust, and response quality. Alignment between user tasks and goals and system functionality shows a stronger association with positive affect than emotional bonding does, while companionship-oriented usage patterns are linked to alliance ruptures, dependency, and symptom exacerbation. The work offers empirical evidence and a new theoretical lens for the design and ethical evaluation of AI-based psychological support systems.

📝 Abstract
Large language models (LLMs) are increasingly used for emotional support and mental health-related interactions outside clinical settings, yet little is known about how people evaluate and relate to these systems in everyday use. We analyze 5,126 Reddit posts from 47 mental health communities describing experiential or exploratory use of AI for emotional support or therapy. Grounded in the Technology Acceptance Model and therapeutic alliance theory, we develop a theory-informed annotation framework and apply a hybrid LLM-human pipeline to analyze evaluative language, adoption-related attitudes, and relational alignment at scale. Our results show that engagement is shaped primarily by narrated outcomes, trust, and response quality, rather than emotional bond alone. Positive sentiment is most strongly associated with task and goal alignment, while companionship-oriented use more often involves misaligned alliances and reported risks such as dependence and symptom escalation. Overall, this work demonstrates how theory-grounded constructs can be operationalized in large-scale discourse analysis and highlights the importance of studying how users interpret language technologies in sensitive, real-world contexts.
Problem


large language models, mental health, emotional support, user evaluation, therapeutic alliance
Innovation


large language models, mental health, therapeutic alliance, discourse analysis, human-AI interaction