🤖 AI Summary
This work addresses the challenge of detecting implicit social biases in large language model (LLM) generation—biases that lack explicit linguistic markers and thus evade conventional detection. We propose a highly interpretable bias detection method that integrates nested semantic representations with a context-aware contrastive mechanism. Specifically, it combines attention-weight perturbation analysis and sensitivity modeling of socially salient terms to systematically uncover the semantic pathways underlying biased generation. Additionally, vector-space structural analysis is employed to extract latent bias features from model outputs. Experiments on the StereoSet benchmark demonstrate high multi-dimensional bias detection accuracy, robust discrimination between semantically similar yet bias-divergent texts, and preservation of semantic coherence and generation stability. Our core contribution is the first synergistic application of attention perturbation and structured semantic analysis for explainable detection of implicit bias in LLMs.
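The attention-weight perturbation idea can be illustrated with a minimal, self-contained sketch: compute a single attention head over a token sequence, dampen the pre-softmax score that every query assigns to one socially salient token, and take the L2 shift in the attention output as a sensitivity score. This is a toy illustration under assumed names and random embeddings, not the paper's implementation; the token list and the choice of index 2 as the attribute term are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_output(Q, K, V, target_idx=None, scale=None):
    """Single-head attention; optionally rescales the pre-softmax score
    that every query assigns to the token at `target_idx`."""
    scores = Q @ K.T / np.sqrt(K.shape[1])
    if target_idx is not None and scale is not None:
        # Adding log(scale) multiplies that token's attention weight by ~scale.
        scores[:, target_idx] += np.log(scale)
    return softmax(scores) @ V

def sensitivity(Q, K, V, target_idx, scale=0.5):
    """L2 shift in the attention output when attention to one token is dampened."""
    base = attention_output(Q, K, V)
    pert = attention_output(Q, K, V, target_idx=target_idx, scale=scale)
    return float(np.linalg.norm(base - pert))

# Hypothetical 4-token sequence; index 2 stands in for a social-attribute term.
tokens = ["the", "doctor", "she", "said"]
Q = rng.normal(size=(4, 8))
K = rng.normal(size=(4, 8))
V = rng.normal(size=(4, 8))

score = sensitivity(Q, K, V, target_idx=2)
```

In a real LLM the same probe would run on the model's own attention tensors rather than random matrices; a large sensitivity score for an attribute term flags an attention pathway worth inspecting for bias.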
📝 Abstract
This paper addresses implicit stereotypes that may arise during large language model generation and proposes an interpretable bias detection method for identifying hidden social biases in model outputs, especially semantic tendencies that are not easily captured through explicit linguistic features. The method combines nested semantic representation with a contextual contrast mechanism, extracting latent bias features from the vector-space structure of model outputs. Using attention-weight perturbation, it analyzes the model's sensitivity to specific social-attribute terms, thereby revealing the semantic pathways through which bias forms. To validate the method, this study uses the StereoSet dataset, which covers multiple stereotype dimensions including gender, profession, religion, and race; the evaluation focuses on key metrics such as bias detection accuracy, semantic consistency, and contextual sensitivity. Experimental results show that the proposed method achieves strong detection performance across dimensions: it accurately identifies bias differences between semantically similar texts while maintaining high semantic alignment and output stability. The method's structural design is also highly interpretable, helping to uncover the internal bias-association mechanisms within language models. This provides a more transparent and reliable technical foundation for bias detection, suited to real-world applications that require high trustworthiness of generated content.
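The contrast between semantically similar but bias-divergent texts can be sketched in a self-contained toy: embed two near-paraphrase continuations (in StereoSet style, a stereotype and an anti-stereotype) and measure both their semantic similarity and their separation along an attribute direction spanned by a contrastive word pair. The hand-crafted word vectors and sentences below are purely illustrative; the paper's method operates on LLM output representations, not these toys.

```python
import numpy as np

# Hypothetical 4-d word vectors; a real system would use LLM hidden states.
VOCAB = {
    "the":    np.array([0.0, 0.0, 0.5, 0.5]),
    "nurse":  np.array([0.0, 1.0, 0.0, 0.0]),
    "he":     np.array([1.0, 0.0, 0.0, 0.0]),
    "she":    np.array([-1.0, 0.0, 0.0, 0.0]),
    "was":    np.array([0.0, 0.0, 1.0, 0.0]),
    "gentle": np.array([0.0, 0.0, 0.0, 1.0]),
}

def embed(sentence):
    """Mean-pool word vectors into a sentence embedding."""
    return np.mean([VOCAB[w] for w in sentence.split()], axis=0)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Attribute direction spanned by a contrastive social-attribute pair.
bias_dir = VOCAB["he"] - VOCAB["she"]
bias_dir /= np.linalg.norm(bias_dir)

a = embed("the nurse she was gentle")
b = embed("the nurse he was gentle")

semantic_similarity = cosine(a, b)           # high: the texts are near-paraphrases
bias_divergence = float((b - a) @ bias_dir)  # nonzero: they split along the bias axis
```

The pair of scores captures the abstract's claim in miniature: the two outputs stay semantically aligned (high cosine) yet are cleanly separable along a latent bias direction in the vector space.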