Navigating the Rabbit Hole: Emergent Biases in LLM-Generated Attack Narratives Targeting Mental Health Groups

📅 2025-04-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study reveals that large language models (LLMs) spontaneously generate stigmatizing and adversarial narratives targeting mental health populations—even in zero-shot settings—exacerbating bias and harm toward high-risk groups. It presents the first systematic audit of LLMs' spontaneous adversarial behavior toward mental health entities. Methodologically, it introduces an interdisciplinary bias-propagation framework integrating network centrality analysis (closeness centrality; *p* = 4.06×10⁻¹⁰) and Gini-coefficient-based clustering (*G* = 0.7), grounded in sociological stigma theory to quantify labeling effects. Results show that mental health–related entities occupy statistically significant central positions within adversarial narrative networks, and that labeling intensity progressively amplifies along attack chains. The work advances LLM bias evaluation through a rigorous, cross-disciplinary methodology and supports putting AI ethics into practice to safeguard vulnerable populations.

📝 Abstract
Large Language Models (LLMs) have been shown to demonstrate imbalanced biases against certain groups. However, the study of unprovoked targeted attacks by LLMs towards at-risk populations remains underexplored. Our paper presents three novel contributions: (1) the explicit evaluation of LLM-generated attacks on highly vulnerable mental health groups; (2) a network-based framework to study the propagation of relative biases; and (3) an assessment of the relative degree of stigmatization that emerges from these attacks. Our analysis of a recently released large-scale bias audit dataset reveals that mental health entities occupy central positions within attack narrative networks, as revealed by a significantly higher mean closeness centrality (p = 4.06×10⁻¹⁰) and dense clustering (Gini coefficient = 0.7). Drawing from sociological foundations of stigmatization theory, our stigmatization analysis indicates increased labeling components for mental health disorder-related targets relative to initial targets in generation chains. Taken together, these insights shed light on the structural predilections of large language models to heighten harmful discourse and highlight the need for suitable approaches for mitigation.
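The two network statistics the abstract reports—closeness centrality of entities in the attack-narrative graph and a Gini coefficient measuring clustering inequality—can be illustrated with a minimal sketch. This is not the authors' code; the toy graph, entity names, and the choice to compute the Gini coefficient over node degrees are all assumptions made for illustration:

```python
# Illustrative sketch only: closeness centrality and a degree-based Gini
# coefficient on a hypothetical attack-narrative graph. Edges point from a
# narrative's source entity to its target along a generation chain.
import networkx as nx


def gini(values):
    """Gini coefficient of non-negative values (0 = perfectly equal,
    approaching 1 = maximally unequal)."""
    xs = sorted(values)
    n, total = len(xs), sum(xs)
    if n == 0 or total == 0:
        return 0.0
    # Standard closed form: G = 2 * sum(i * x_i) / (n * total) - (n + 1) / n
    weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * weighted) / (n * total) - (n + 1) / n


# Toy directed graph; node labels are invented placeholders.
G = nx.DiGraph()
G.add_edges_from([
    ("entity_a", "depression"),
    ("entity_b", "depression"),
    ("depression", "anxiety"),
    ("anxiety", "entity_c"),
])

closeness = nx.closeness_centrality(G)          # per-node closeness
degree_gini = gini([d for _, d in G.degree()])  # inequality of connectivity

print(closeness["depression"], round(degree_gini, 3))
```

In this toy graph the mental-health nodes sit mid-chain, so their closeness exceeds that of the peripheral source entities—the qualitative pattern the paper reports at scale, though the paper's actual pipeline and statistics are far richer than this sketch.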
Problem

Research questions and friction points this paper is trying to address.

Evaluates LLM-generated attacks on vulnerable mental health groups
Studies bias propagation in attack narratives using network analysis
Assesses stigmatization levels in LLM-generated mental health attacks
Innovation

Methods, ideas, or system contributions that make the work stand out.

First systematic audit of LLMs' spontaneous adversarial behavior toward mental health entities
Network-based bias propagation framework combining closeness centrality and Gini-coefficient clustering
Stigmatization analysis grounded in sociological labeling theory, tracking amplification along generation chains
Rijul Magu
College of Computing, Georgia Institute of Technology, Georgia, USA
Arka Dutta
PhD Student, Rochester Institute of Technology
Natural Language Processing · Computational Social Science · AI for Social Good · FATE
Sean Kim
College of Computing, Georgia Institute of Technology, Georgia, USA
Ashiqur R. KhudaBukhsh
Rochester Institute of Technology, Rochester, New York, USA
Munmun De Choudhury
Georgia Institute of Technology
Computational Social Science · Social Computing · Mental Health · Language