🤖 AI Summary
This study identifies systematic misconceptions among users of general-purpose large language model (LLM) chatbots for mental health support: users conflate the anthropomorphic empathy these chatbots exhibit with professional accountability and erroneously assume that HIPAA or equivalent healthcare regulations apply to their conversations. Drawing on semi-structured interviews with 21 U.S. users, the study introduces the concept of "intangible vulnerability": a blind spot in which users undervalue emotional and psychological disclosures relative to more tangible forms of information (e.g., financial or location data) and overlook the absence of regulatory oversight for LLMs. This construct addresses a critical gap in privacy research on generative AI in psychological contexts. The study further documents the misalignment between the current regulatory void and user expectations, and in response proposes a data protection approach tailored to LLM-based mental health use, centered on informed design, explicit risk communication, and interdisciplinary governance.
📝 Abstract
Individuals are increasingly relying on large language model (LLM)-enabled conversational agents for emotional support. While prior research has examined privacy and security issues in chatbots specifically designed for mental health purposes, those chatbots are overwhelmingly "rule-based" offerings that do not leverage generative AI. Little empirical research currently measures users' privacy and security concerns, attitudes, and expectations when they use general-purpose LLM-enabled chatbots to manage and improve mental health. Through 21 semi-structured interviews with U.S. participants, we identified critical misconceptions and a general lack of risk awareness. Participants conflated the human-like empathy exhibited by LLMs with human-like accountability and mistakenly believed that their interactions with these chatbots were safeguarded by the same regulations (e.g., HIPAA) that govern disclosures to a licensed therapist. We introduce the concept of "intangible vulnerability," where emotional or psychological disclosures are undervalued compared to more tangible forms of information (e.g., financial or location-based data). To address this, we propose recommendations to more effectively safeguard users' mental health disclosures to general-purpose LLM-enabled chatbots.