🤖 AI Summary
This study reveals that large language models (LLMs) spontaneously generate stigmatizing and adversarial narratives targeting mental health populations, even in zero-shot settings, thereby compounding bias and harm toward high-risk groups. It presents the first systematic audit of LLMs' spontaneous adversarial behavior toward mental health entities. Methodologically, it introduces an interdisciplinary bias propagation framework that combines network centrality analysis (closeness centrality; *p* = 4.06×10⁻¹⁰) with Gini-coefficient-based clustering (*G* = 0.7), grounded in sociological stigma theory to quantify labeling effects. Results show that mental health–related entities occupy statistically significant central positions within adversarial narrative networks and that labeling intensity progressively amplifies along attack chains. The work advances LLM bias evaluation through a rigorous, cross-disciplinary methodology and supports the practical application of AI ethics to safeguarding vulnerable populations.
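A minimal sketch of how the two network quantities could be computed, assuming attack narratives are represented as a simple directed entity graph in `networkx`; the node names, edges, and Gini helper below are illustrative assumptions, not the authors' pipeline:

```python
# Minimal sketch (not the authors' code): closeness centrality and a Gini
# coefficient over a toy "attack narrative" graph, where edges point from an
# attacking narrative to the entity it targets.
import networkx as nx
import numpy as np

# Hypothetical toy graph; real node/edge construction would come from the audit dataset.
G = nx.DiGraph()
G.add_edges_from([
    ("narrative_1", "depression"),
    ("narrative_2", "depression"),
    ("narrative_3", "schizophrenia"),
    ("narrative_4", "schizophrenia"),
    ("narrative_1", "unemployed"),
])

# Closeness centrality on the undirected view of the network; higher values
# mean an entity sits structurally closer to all other nodes.
closeness = nx.closeness_centrality(G.to_undirected())
print(sorted(closeness.items(), key=lambda kv: -kv[1]))

def gini(x):
    """Gini coefficient of a non-negative 1-D array (0 = even, 1 = concentrated)."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    cum = np.cumsum(x)
    return (n + 1 - 2 * np.sum(cum) / cum[-1]) / n

# Concentration of attacks across target entities (in-degree distribution).
in_degrees = [d for _, d in G.in_degree() if d > 0]
print("Gini of attack concentration:", round(gini(in_degrees), 3))
```

In this reading, a high closeness centrality for mental health entities and a high Gini over attack counts together indicate that attacks are both structurally central and densely concentrated on a small set of targets.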
📝 Abstract
Large Language Models (LLMs) have been shown to exhibit disproportionate biases against certain groups. However, unprovoked targeted attacks by LLMs on at-risk populations remain underexplored. Our paper presents three novel contributions: (1) an explicit evaluation of LLM-generated attacks on highly vulnerable mental health groups; (2) a network-based framework to study the propagation of relative biases; and (3) an assessment of the relative degree of stigmatization that emerges from these attacks. Our analysis of a recently released large-scale bias audit dataset reveals that mental health entities occupy central positions within attack narrative networks, as evidenced by a significantly higher mean closeness centrality (*p* = 4.06×10⁻¹⁰) and dense clustering (Gini coefficient = 0.7). Drawing on the sociological foundations of stigmatization theory, our analysis indicates increased labeling components for mental health disorder–related targets relative to initial targets in generation chains. Taken together, these insights shed light on the structural tendencies of large language models to amplify harmful discourse and highlight the need for suitable mitigation approaches.
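To make the labeling analysis more concrete, here is a heavily simplified, hypothetical sketch; the stigma-label lexicon, scoring function, and example chain are assumptions for illustration only, not the paper's actual stigma instrument:

```python
# Hypothetical sketch: approximate "labeling" intensity at each step of a
# generation chain by counting stigma-label terms in the generated narrative.
STIGMA_LABELS = {"crazy", "unstable", "dangerous", "broken", "weak"}

def labeling_score(text: str) -> int:
    """Count occurrences of stigma-label terms in one generated narrative."""
    tokens = text.lower().split()
    return sum(tok.strip(".,!?") in STIGMA_LABELS for tok in tokens)

# Hypothetical generation chain: each element is one step's narrative,
# from the initial target to downstream, derived targets.
chain = [
    "The target group faces many challenges.",            # initial target
    "They are unstable and cannot be trusted.",           # intermediate step
    "These crazy, dangerous people are simply broken.",   # final step
]

scores = [labeling_score(step) for step in chain]
print(scores)  # [0, 1, 3] -> labeling intensity grows along the chain
```

Under this toy scoring, a rising score across chain positions would mirror the paper's finding that labeling components increase for downstream mental health–related targets relative to the initial targets.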