Causally Testing Gender Bias in LLMs: A Case Study on Occupational Bias

📅 2022-12-20
🏛️ North American Chapter of the Association for Computational Linguistics
📈 Citations: 19
Influential: 0
🤖 AI Summary
This work addresses occupational gender bias in large language models (LLMs). It introduces OccuGender, a new benchmark paired with causally grounded metrics for gender bias assessment, combining a causal formulation of bias with controlled prompt design and contrastive analysis of model generations. Unlike conventional correlational analyses, the evaluation is framed in terms of causal inference. Experiments on mainstream open-source models (e.g., Llama, Mistral) find substantial occupational gender bias in all tested models; zero-shot and few-shot prompting offer some mitigation, but their effectiveness is clearly limited. The result is a rigorous, reproducible, and scalable framework for fairness assessment and bias auditing in LLMs.
📝 Abstract
Generated texts from large language models (LLMs) have been shown to exhibit a variety of harmful, human-like biases against various demographics. These findings motivate research efforts aiming to understand and measure such effects. This paper introduces a causal formulation for bias measurement in generative language models. Based on this theoretical foundation, we outline a list of desiderata for designing robust bias benchmarks. We then propose a benchmark called OccuGender, with a bias-measuring procedure to investigate occupational gender bias. We test several state-of-the-art open-source LLMs on OccuGender, including Llama, Mistral, and their instruction-tuned versions. The results show that these models exhibit substantial occupational gender bias. Lastly, we discuss prompting strategies for bias mitigation and an extension of our causal formulation to illustrate the generalizability of our framework. Our code and data are available at https://github.com/chenyuen0103/gender-bias.
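The paper's exact measurement procedure is not reproduced on this page, but prompt-based occupational bias probes of this kind typically compare a model's probabilities for gendered continuations of an occupation prompt. A minimal sketch of that idea, assuming a hypothetical `next_token_probs` callable standing in for a real language model (not the authors' actual OccuGender procedure):

```python
def gender_bias_score(next_token_probs, occupation):
    """Compare P('she') vs. P('he') after an occupation prompt.

    Positive -> female-skewed, negative -> male-skewed, 0 -> balanced.
    `next_token_probs` is any callable mapping a prompt string to a
    {token: probability} dict (e.g. a wrapped language model).
    """
    prompt = f"The {occupation} said that"
    probs = next_token_probs(prompt)
    p_she, p_he = probs.get("she", 0.0), probs.get("he", 0.0)
    total = p_she + p_he
    if total == 0:
        return 0.0
    # Normalize over the two gendered pronouns so the score lies in [-1, 1].
    return (p_she - p_he) / total


# Toy stand-in model: hard-coded pronoun probabilities per occupation.
def toy_model(prompt):
    if "nurse" in prompt:
        return {"she": 0.6, "he": 0.2, "they": 0.2}
    return {"she": 0.2, "he": 0.6, "they": 0.2}


print(round(gender_bias_score(toy_model, "nurse"), 2))     # 0.5 (female-skewed)
print(round(gender_bias_score(toy_model, "engineer"), 2))  # -0.5 (male-skewed)
```

In practice the stand-in model would be replaced by real next-token probabilities from an LLM, and scores would be aggregated over many occupations and prompt templates.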
Problem

Research questions and friction points this paper is trying to address.

Measure occupational gender bias in LLMs
Develop causal framework for bias assessment
Propose mitigation strategies for model bias
Innovation

Methods, ideas, or system contributions that make the work stand out.

Causal formulation for bias measurement
OccuGender benchmark for occupational bias
Prompting strategies for bias mitigation