Intrinsic Meets Extrinsic Fairness: Assessing the Downstream Impact of Bias Mitigation in Large Language Models

📅 2025-09-19
🤖 AI Summary
This work investigates the downstream fairness implications of mitigating intrinsic biases in large language models (LLMs). Addressing the risk that socioeconomic biases, particularly in high-stakes domains like finance, may propagate to deployed systems, it proposes a unified evaluation framework to systematically compare two bias-mitigation strategies: concept erasure (an intrinsic intervention) and counterfactual data augmentation (an extrinsic intervention), under both frozen embedding extraction and fine-tuned classification paradigms. Experiments on real-world financial classification tasks show that concept erasure reduces intrinsic gender bias by up to 94.9%, improves downstream demographic parity by up to 82%, and preserves model accuracy. The key contribution is empirical evidence that early-stage intrinsic interventions carry through to downstream fairness, revealing a reproducible, quantifiable pathway for pre-deployment bias governance in LLMs.

📝 Abstract
Large Language Models (LLMs) exhibit socio-economic biases that can propagate into downstream tasks. While prior studies have questioned whether intrinsic bias in LLMs affects fairness at the downstream task level, this work empirically investigates the connection. We present a unified evaluation framework to compare intrinsic bias mitigation via concept unlearning with extrinsic bias mitigation via counterfactual data augmentation (CDA). We examine this relationship through real-world financial classification tasks, including salary prediction, employment status, and creditworthiness assessment. Using three open-source LLMs, we evaluate models both as frozen embedding extractors and as fine-tuned classifiers. Our results show that intrinsic bias mitigation through unlearning reduces intrinsic gender bias by up to 94.9%, while also improving downstream fairness metrics such as demographic parity (by up to 82%) without compromising accuracy. Our framework offers practical guidance on where mitigation efforts can be most effective and highlights the importance of applying early-stage mitigation before downstream deployment.
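The abstract's headline downstream-fairness result is stated in terms of demographic parity. As a minimal sketch (not the authors' implementation), the demographic parity difference for a binary classifier and a binary protected attribute is the absolute gap in positive-prediction rates between the two groups; the function name and encoding below are illustrative assumptions:

```python
def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-prediction rates between groups 0 and 1.

    y_pred: list of 0/1 predictions; group: list of 0/1 protected attributes.
    A value of 0.0 means parity; larger values mean more disparity.
    """
    def positive_rate(g):
        preds = [p for p, a in zip(y_pred, group) if a == g]
        return sum(preds) / len(preds) if preds else 0.0
    return abs(positive_rate(0) - positive_rate(1))
```

An "82% improvement" in this metric would mean the post-mitigation difference is 82% smaller than the pre-mitigation one.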
Problem

Research questions and friction points this paper is trying to address.

Assessing how intrinsic bias mitigation affects downstream task fairness
Comparing intrinsic unlearning with extrinsic data augmentation for bias reduction
Evaluating bias mitigation effectiveness in financial classification applications
Innovation

Methods, ideas, or system contributions that make the work stand out.

Evaluating intrinsic bias mitigation via concept unlearning
Comparing extrinsic bias mitigation via counterfactual data augmentation
Using unified framework for financial classification tasks
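The extrinsic baseline, counterfactual data augmentation, trains the downstream classifier on each example plus a demographic-swapped copy. A toy sketch, assuming a whitespace-tokenized corpus and a hand-written swap lexicon (both illustrative, not the paper's code):

```python
# Toy bidirectional gender-swap lexicon; real CDA uses a curated word list.
SWAP = {"he": "she", "she": "he", "his": "her", "her": "his",
        "man": "woman", "woman": "man"}

def counterfactual(text):
    """Return a gender-swapped copy of a whitespace-tokenized sentence."""
    return " ".join(SWAP.get(tok, tok) for tok in text.lower().split())

def augment(dataset):
    """CDA: originals plus their counterfactual copies, labels unchanged."""
    return dataset + [(counterfactual(x), y) for x, y in dataset]
```

Unlike concept unlearning, which edits the model's representations once, CDA doubles the training data for every downstream task, which is one reason the paper contrasts the two intervention points.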
Mina Arzaghi
MILA - Quebec AI Institute, 6666 Saint-Urbain St, Montreal, QC H2S 3H1
Alireza Dehghanpour Farashah
MILA - Quebec AI Institute, 6666 Saint-Urbain St, Montreal, QC H2S 3H1
Florian Carichon
McGill University
Golnoosh Farnadi
Assistant Professor at McGill University & Mila, Canada CIFAR AI Chair
FATE · Algorithmic Fairness · Responsible AI · Privacy-preserving ML