🤖 AI Summary
This study investigates public perceptions, lived experiences, and latent risks associated with using large language models (LLMs) as informal mental health support tools on social media, specifically TikTok. Method: We analyzed over 10,000 user comments using a novel hierarchical coding framework that integrates BERT-based supervised classification with mixed qualitative and quantitative methods. Contribution/Results: We present the first systematic characterization of user narratives around LLM-based psychological support: ~20% of comments reflect actual usage, with widespread appreciation for accessibility and emotional validation. At the same time, we identify critical latent risks, including privacy violations, generic or context-insensitive responses, and the absence of clinical oversight. The findings underscore the growing socioclinical relevance of AI-delivered mental health support and highlight an urgent need for interdisciplinary clinical validation and robust ethical governance frameworks.
📝 Abstract
The emergence of generative AI chatbots such as ChatGPT has prompted growing public and academic interest in their role as informal mental health support tools. While early rule-based systems have existed for years, large language models (LLMs) offer new capabilities in conversational fluency, empathy simulation, and availability. This study explores how users engage with LLMs as mental health tools by analyzing over 10,000 comments on TikTok videos that reference such use. Using a tiered coding schema we developed, combined with supervised classification models, we identify user experiences, attitudes, and recurring themes. Results show that nearly 20% of comments reflect personal use, and these users express overwhelmingly positive attitudes. Commonly cited benefits include accessibility, emotional support, and perceived therapeutic value. However, concerns about privacy, generic responses, and the lack of professional oversight remain prominent. Notably, the user feedback does not indicate which therapeutic framework, if any, the LLM-generated output aligns with. While the findings underscore the growing relevance of AI in everyday mental health practices, they also highlight the urgent need for clinical and ethical scrutiny in the use of AI for mental health support.
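The abstract does not specify implementation details, but a minimal sketch of the kind of BERT-based supervised classification pipeline it describes (fine-tuning a model to assign top-level codes from a tiered schema to comments) might look like the following. The label set, model checkpoint, hyperparameters, and example data are illustrative assumptions, not taken from the study.

```python
# Hypothetical sketch: fine-tuning a BERT classifier to assign
# top-level codes from a tiered coding schema to TikTok comments.
# Labels, checkpoint, and hyperparameters are assumed for illustration.
import torch
from torch.utils.data import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Example top-level codes (assumed; the study's schema is not public here)
LABELS = ["personal_use", "attitude_only", "off_topic"]

class CommentDataset(Dataset):
    """Wraps raw comment texts and integer code labels for the Trainer."""
    def __init__(self, texts, labels, tokenizer, max_len=128):
        self.enc = tokenizer(texts, truncation=True, padding="max_length",
                             max_length=max_len, return_tensors="pt")
        self.labels = torch.tensor(labels)

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, i):
        return {"input_ids": self.enc["input_ids"][i],
                "attention_mask": self.enc["attention_mask"][i],
                "labels": self.labels[i]}

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(LABELS))

# Toy training examples; the study reports >10,000 human-coded comments.
train_ds = CommentDataset(
    ["it helped me more than my therapist ever did",
     "ai can't replace real therapy"],
    [0, 1], tokenizer)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="bert-comment-coder",
                           num_train_epochs=3,
                           per_device_train_batch_size=16),
    train_dataset=train_ds,
)
trainer.train()
```

In a hierarchical setup of this kind, a classifier at each tier could route comments to finer-grained codes (e.g., splitting personal-use comments into benefit and risk themes), with human coding supplying the supervision signal at every level.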