Exploring User Security and Privacy Attitudes and Concerns Toward the Use of General-Purpose LLM Chatbots for Mental Health

📅 2025-07-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study identifies systematic misconceptions among users of general-purpose large language model (LLM) chatbots for mental health support: users conflate the chatbots' human-like empathy with human-like accountability and erroneously assume that their interactions are protected by HIPAA or equivalent healthcare regulation. Drawing on semi-structured interviews with 21 U.S. users, the study introduces the concept of "intangible vulnerability": emotional and psychological disclosures are undervalued relative to more tangible information (e.g., financial or location data), leading users to underestimate the risks of emotional disclosure and overlook the absence of regulatory oversight over general-purpose LLMs. This construct addresses a gap in privacy research on generative AI in psychological contexts and highlights the misalignment between the current regulatory void and user expectations. In response, the authors propose recommendations for safeguarding mental health disclosures to LLM-based chatbots, centered on informed design, explicit risk communication, and interdisciplinary governance.

📝 Abstract
Individuals are increasingly relying on large language model (LLM)-enabled conversational agents for emotional support. While prior research has examined privacy and security issues in chatbots specifically designed for mental health purposes, these chatbots are overwhelmingly "rule-based" offerings that do not leverage generative AI. Little empirical research currently measures users' privacy and security concerns, attitudes, and expectations when using general-purpose LLM-enabled chatbots to manage and improve mental health. Through 21 semi-structured interviews with U.S. participants, we identified critical misconceptions and a general lack of risk awareness. Participants conflated the human-like empathy exhibited by LLMs with human-like accountability and mistakenly believed that their interactions with these chatbots were safeguarded by the same regulations (e.g., HIPAA) as disclosures to a licensed therapist. We introduce the concept of "intangible vulnerability," where emotional or psychological disclosures are undervalued compared to more tangible forms of information (e.g., financial or location-based data). To address this, we propose recommendations to safeguard user mental health disclosures with general-purpose LLM-enabled chatbots more effectively.
Problem

Research questions and friction points this paper is trying to address.

Examining users' privacy and security concerns, attitudes, and expectations when using general-purpose LLM chatbots for mental health
Identifying misconceptions about chatbot accountability and regulatory data protection
Understanding why emotional and psychological disclosures are undervalued in AI mental health interactions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Semi-structured interviews with 21 U.S. users to assess concerns, attitudes, and expectations
The concept of "intangible vulnerability" for emotional and psychological data
Recommendations to better safeguard mental health disclosures to general-purpose LLM chatbots
Jabari Kwesi
Duke University
Jiaxun Cao
Duke University
Riya Manchanda
Duke University
Pardis Emami-Naeini
Assistant Professor, Computer Science Department, Duke University
Privacy · Security · Human-Computer Interaction · Usability