🤖 AI Summary
Current LLM-based mental health support systems often adopt a one-size-fits-all design, failing to accommodate heterogeneous psychological needs and value priorities across user groups. Method: Drawing on large-scale user discussions from six social platforms, we developed a joint analytical pipeline integrating sentiment, opinion, and value-laden discourse analysis, grounded in Value Sensitive Design (VSD) principles. We propose a condition-specific LLM design framework that differentiates requirements by neurodiversity status, high-risk psychiatric conditions, and other clinically relevant subgroups, emphasizing core values including privacy, autonomy, and identity affirmation. Results: Empirical findings reveal consistently positive affective responses among neurodiverse users, whereas individuals with high-risk mental disorders exhibit significantly elevated negative sentiment, validating the need for both clinical condition alignment and value embedding. This work contributes a theoretically grounded, empirically validated framework for developing ethically robust, personalized mental health dialogue systems.
📝 Abstract
Large language model (LLM) chatbots like ChatGPT are increasingly used for mental health support. They offer accessible, therapeutic support but also raise concerns about misinformation, over-reliance, and risks in high-stakes mental health contexts. We crowdsourced large-scale user posts from six major social media platforms to examine how people discuss their interactions with LLM chatbots across different mental health conditions. Through an LLM-assisted pipeline grounded in Value-Sensitive Design (VSD), we mapped the relationships among user-reported sentiments, mental health conditions, perspectives, and values. Our results reveal that the use of LLM chatbots is condition-specific. Users with neurodivergent conditions (e.g., ADHD, ASD) report strongly positive sentiments and instrumental or appraisal support, whereas users with higher-risk disorders (e.g., schizophrenia, bipolar disorder) express more negative sentiments. We further uncover how user perspectives co-occur with underlying values, such as identity, autonomy, and privacy. Finally, we discuss shifting from "one-size-fits-all" chatbot design toward condition-specific, value-sensitive LLM design.
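The condition-by-sentiment mapping described in the abstract can be sketched as a small annotation-and-aggregation pipeline. This is an illustrative stand-in, not the authors' code: the label sets, the keyword heuristics in `annotate`, and the function names are hypothetical placeholders for the paper's LLM-assisted annotation step.

```python
# Minimal sketch of a VSD-grounded annotation pipeline (hypothetical;
# the real pipeline uses an LLM annotator, not keyword rules).
from collections import Counter, defaultdict
from dataclasses import dataclass

VALUES = {"identity", "autonomy", "privacy"}          # VSD value labels
CONDITIONS = ("adhd", "asd", "schizophrenia", "bipolar")

@dataclass
class Annotation:
    sentiment: str       # "positive" | "negative" | "neutral"
    condition: str       # detected condition label, or "UNSPECIFIED"
    values: set          # subset of VALUES mentioned in the post

def annotate(post: str) -> Annotation:
    """Toy stand-in for the LLM annotator: keyword heuristics only."""
    text = post.lower()
    sentiment = ("positive" if any(w in text for w in ("helps", "love", "useful"))
                 else "negative" if any(w in text for w in ("worse", "scary", "harm"))
                 else "neutral")
    condition = next((c for c in CONDITIONS if c in text), "unspecified").upper()
    return Annotation(sentiment, condition, {v for v in VALUES if v in text})

def sentiment_by_condition(posts):
    """Aggregate per-condition sentiment counts across a corpus of posts."""
    tally = defaultdict(Counter)
    for post in posts:
        a = annotate(post)
        tally[a.condition][a.sentiment] += 1
    return tally

# Example: a single post tagged with sentiment, condition, and values.
a = annotate("ChatGPT helps me manage my ADHD and respects my autonomy")
print(a.sentiment, a.condition, a.values)
```

In the paper's setting, the keyword rules would be replaced by prompted LLM calls, but the output structure (sentiment, condition, value set per post) is what enables the condition-specific analysis described above.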