Identifying and Mitigating Social Bias Knowledge in Language Models

📅 2024-08-07
📈 Citations: 4
Influential: 0
🤖 AI Summary
This paper addresses the challenge of mitigating social bias in large language models (LLMs) while preserving commonsense factual accuracy. It introduces BiaScope, a fine-grained bias evaluation benchmark, and proposes FAST, an interpretable debiasing method. FAST combines a bias-knowledge localization mechanism, grounded in layer-wise sensitivity analysis, with lightweight, instance-specific calibration modules, enabling precise intervention without compromising the model's original knowledge. BiaScope establishes a joint evaluation paradigm that measures knowledge retention, generalization capability, and fairness together. Extensive experiments demonstrate that FAST significantly outperforms state-of-the-art methods across multiple bias benchmarks: it reduces bias by 32.7%, maintains 98.5% factual accuracy, and incurs no performance degradation on downstream tasks.
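The localization step above can be illustrated with a toy sketch. The paper's exact sensitivity measure is not reproduced here; this hypothetical example only conveys the idea of feeding a counterfactual prompt pair through a frozen model and picking the layer whose hidden states diverge most, as a proxy for where the biased association is stored. The MLP stack, dimensions, and the norm-based sensitivity score are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N_LAYERS, DIM = 6, 16

# Random per-layer weights stand in for a frozen transformer stack.
weights = [rng.normal(size=(DIM, DIM)) / np.sqrt(DIM) for _ in range(N_LAYERS)]

def hidden_states(x):
    """Return the hidden state after each layer (toy tanh MLP stack)."""
    states = []
    for w in weights:
        x = np.tanh(x @ w)
        states.append(x)
    return states

# Toy embeddings for a counterfactual pair (e.g. "he is a doctor" /
# "she is a doctor"): identical except for a few perturbed dimensions
# that mimic the demographic swap.
x_a = rng.normal(size=DIM)
x_b = x_a.copy()
x_b[:4] += 2.0

# Layer-wise sensitivity: how far the two hidden states drift apart.
h_a, h_b = hidden_states(x_a), hidden_states(x_b)
sensitivity = [np.linalg.norm(a - b) for a, b in zip(h_a, h_b)]
decisive_layer = int(np.argmax(sensitivity))
print("per-layer sensitivity:", [round(s, 3) for s in sensitivity])
print("decisive layer:", decisive_layer)
```

In the real method this comparison would run over many counterfactual pairs and a real LLM's hidden states; the argmax layer is then the intervention target.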

📝 Abstract
Generating fair and accurate predictions plays a pivotal role in deploying large language models (LLMs) in the real world. However, existing debiasing methods inevitably generate unfair or incorrect predictions as they are designed and evaluated to achieve parity across different social groups but leave aside individual commonsense facts, resulting in modified knowledge that elicits unreasonable or undesired predictions. In this paper, we first establish a new bias mitigation benchmark, BiaScope, which systematically assesses performance by leveraging newly constructed datasets and metrics on knowledge retention and generalization. Then, we propose a novel debiasing approach, Fairness Stamp (FAST), which enables fine-grained calibration of individual social biases. FAST identifies the decisive layer responsible for storing social biases and then calibrates its outputs by integrating a small modular network, considering both bias mitigation and knowledge-preserving demands. Comprehensive experiments demonstrate that FAST surpasses state-of-the-art baselines with superior debiasing performance while not compromising the overall model capability for knowledge retention and downstream predictions. This highlights the potential of fine-grained debiasing strategies to achieve fairness in LLMs.
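The "small modular network" integrated at the decisive layer can be sketched as a bottleneck adapter. This is a hypothetical illustration, not the paper's implementation: the class name, sizes, and zero-initialized up-projection are assumptions chosen to show how such a module can be knowledge-preserving by construction before any debiasing training.

```python
import numpy as np

rng = np.random.default_rng(1)
DIM, BOTTLENECK = 16, 4

class CalibrationModule:
    """Bottleneck adapter: h + relu(h @ W_down) @ W_up.

    W_up starts at zero, so at initialization the module is an exact
    identity mapping: the original model's knowledge is untouched until
    the adapter is trained on the debiasing objective."""
    def __init__(self):
        self.w_down = rng.normal(size=(DIM, BOTTLENECK)) / np.sqrt(DIM)
        self.w_up = np.zeros((BOTTLENECK, DIM))  # identity at init

    def __call__(self, h):
        return h + np.maximum(h @ self.w_down, 0.0) @ self.w_up

module = CalibrationModule()
h = rng.normal(size=DIM)            # hidden state at the decisive layer
assert np.allclose(module(h), h)    # knowledge-preserving at init

# After training, W_up becomes nonzero and the module nudges the biased
# directions of h; here we just simulate a trained adapter's weights.
module.w_up = rng.normal(size=(BOTTLENECK, DIM)) * 0.1
print("calibrated delta norm:", round(float(np.linalg.norm(module(h) - h)), 3))
```

Splicing the module onto a single layer's output, rather than fine-tuning the whole model, is what keeps the intervention fine-grained: only the adapter's parameters are trained, against both the bias-mitigation and knowledge-preserving objectives.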
Problem

Research questions and friction points this paper is trying to address.

Existing debiasing enforces group parity but discards individual commonsense facts
Locating where social biases are stored inside LLMs
Mitigating bias without degrading knowledge retention or downstream performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces BiaScope bias benchmark
Proposes FAST fine-grained debiasing
Integrates modular network for calibration