🤖 AI Summary
This work addresses the persistence of internalized gender bias in large language models (LLMs) despite current alignment techniques that primarily mitigate only surface-level, explicit bias in model outputs. The authors propose a unified analytical framework that employs shared neutral prompts to simultaneously probe intrinsic gender information encoded in internal representations and explicit bias manifested in generated text. Under this unified protocol, they reveal—for the first time—a consistent correlation between internal and external biases. Their findings demonstrate that while alignment methods suppress overt bias in outputs, latent internal bias remains intact and can be reactivated by adversarial prompts. Evaluations across structured benchmarks and realistic scenarios such as story generation further show that existing supervised fine-tuning–based alignment strategies merely mask, rather than eliminate, encoded biases, with limited generalization to complex, real-world applications.
📝 Abstract
During training, Large Language Models (LLMs) learn social regularities that can lead to gender bias in downstream applications. Most mitigation efforts focus on reducing bias in generated outputs, typically evaluated on structured benchmarks, which raises two concerns: output-level evaluation does not reveal whether alignment modifies the model's underlying representations, and structured benchmarks may not reflect realistic usage scenarios. We propose a unified framework to jointly analyze intrinsic and extrinsic gender bias in LLMs using identical neutral prompts, enabling direct comparison between gender-related information encoded in internal representations and bias expressed in generated outputs. Contrary to prior work reporting weak or inconsistent correlations, we find a consistent association between latent gender information and expressed bias when measured under the unified protocol. We further examine the effect of alignment through supervised fine-tuning aimed at reducing gender bias. Our results suggest that while such fine-tuning indeed reduces expressed bias, measurable gender-related associations are still present in internal representations and can be reactivated under adversarial prompting. Finally, we consider two realistic settings and show that debiasing effects observed on structured benchmarks do not necessarily generalize, for example, to story generation.
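The two sides of the unified protocol can be illustrated with a minimal toy sketch. Everything here is synthetic and hypothetical (random vectors stand in for LLM hidden states, and a pronoun-count heuristic stands in for a bias benchmark); it is not the paper's actual probe or evaluation, only the shape of the idea: from the same prompt, measure (a) whether gender is decodable from the internal representation and (b) how much gender skew the generated text expresses.

```python
import numpy as np

rng = np.random.default_rng(0)

# Intrinsic side: synthetic stand-ins for hidden states of neutral prompts,
# labelled by a hypothetical gender association; an offset along one axis
# simulates gender information encoded in the representation.
n, d = 200, 16
labels = rng.integers(0, 2, size=n)          # 0/1 gender label (toy)
hidden = rng.normal(size=(n, d))
hidden[:, 0] += 3.0 * labels                 # gender-correlated direction

# Mean-difference linear probe: if gender is decodable from the
# representations, held-out accuracy is well above chance (0.5).
tr, te = slice(0, 100), slice(100, None)
mu0 = hidden[tr][labels[tr] == 0].mean(axis=0)
mu1 = hidden[tr][labels[tr] == 1].mean(axis=0)
w = mu1 - mu0
b = -w @ (mu0 + mu1) / 2
preds = (hidden[te] @ w + b > 0).astype(int)
probe_acc = (preds == labels[te]).mean()
print(f"intrinsic probe accuracy: {probe_acc:.2f}")

# Extrinsic side: score the text generated from the same prompt by the
# skew of gendered pronouns (a crude stand-in for an output benchmark).
MALE, FEMALE = {"he", "him", "his"}, {"she", "her", "hers"}

def expressed_bias(text: str) -> float:
    """Signed skew in [-1, 1]; 0 means balanced, +1 all-male."""
    toks = text.lower().split()
    m = sum(t in MALE for t in toks)
    f = sum(t in FEMALE for t in toks)
    return 0.0 if m + f == 0 else (m - f) / (m + f)

print(f"extrinsic bias score: {expressed_bias('The doctor said he was busy'):+.2f}")  # +1.00
```

Because both quantities are computed from the same prompts, their correlation across a prompt set can be measured directly, which is what distinguishes this unified protocol from prior work that probed intrinsic and extrinsic bias under different conditions.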