Detecting Linguistic Bias in Government Documents Using Large Language Models

📅 2025-02-19
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This study addresses the long-overlooked yet critical problem of detecting implicit linguistic bias in government documentsβ€”a key concern for equitable governance. Existing methods suffer from poor domain adaptability and fail to uncover the mechanisms through which bias operates within policy contexts. To bridge this gap, we introduce DGDB, the first expert-annotated bias dataset specifically designed for Dutch parliamentary documents, covering fine-grained bias types across public policy domains. Methodologically, we integrate domain-adapted BERT-based fine-tuning, zero- and few-shot inference with generative large language models, and expert-driven annotation coupled with interpretable error analysis. Experimental results demonstrate that domain-finetuned BERT models substantially outperform general-purpose LLMs on bias detection. DGDB serves as the first benchmark resource enabling reproducible, multilingual research on governance fairness, providing both foundational data and an extensible methodological framework for fairness assessment in governmental texts.

πŸ“ Abstract
This paper addresses the critical need for detecting bias in government documents, an underexplored area with significant implications for governance. Existing methodologies often overlook the unique context and far-reaching impacts of governmental documents, potentially obscuring embedded biases that shape public policy and citizen-government interactions. To bridge this gap, we introduce the Dutch Government Data for Bias Detection (DGDB), a dataset sourced from the Dutch House of Representatives and annotated for bias by experts. We fine-tune several BERT-based models on this dataset and compare their performance with that of generative language models. Additionally, we conduct a comprehensive error analysis that includes explanations of the models' predictions. Our findings demonstrate that fine-tuned models achieve strong performance and significantly outperform generative language models, indicating the effectiveness of DGDB for bias detection. This work underscores the importance of labeled datasets for bias detection in various languages and contributes to more equitable governance practices.
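The paper's central claim rests on comparing fine-tuned BERT-based models against generative language models on the same bias-detection task, which comes down to standard classification metrics. A minimal, self-contained sketch of that kind of evaluation (the label scheme and example predictions below are hypothetical, not the paper's data):

```python
def precision_recall_f1(y_true, y_pred, positive=1):
    """Compute precision, recall, and F1 for one class of a
    binary bias-detection task (e.g. 1 = biased, 0 = neutral)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return precision, recall, f1

# Hypothetical gold labels and model predictions on six sentences.
y_true = [1, 0, 1, 1, 0, 0]
y_pred = [1, 0, 0, 1, 1, 0]
p, r, f = precision_recall_f1(y_true, y_pred)  # each ≈ 0.667 here
```

In practice one would report such scores per model (fine-tuned BERT variants vs. zero- and few-shot generative LLMs) on a held-out split of DGDB; a library such as scikit-learn provides the same metrics off the shelf.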
Problem

Research questions and friction points this paper is trying to address.

Detecting bias in government documents
Using large language models for bias detection
Improving governance through equitable practices
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fine-tuned BERT-based models
Dutch Government Data for Bias Detection
Outperform generative language models
Milena de Swart
Ministerie van OCW, The Hague, The Netherlands
Floris den Hengst
Vrije Universiteit Amsterdam
Reinforcement Learning · Safe AI · Applied Reinforcement Learning · AI in Medicine
Jieying Chen
Vrije Universiteit Amsterdam, Amsterdam, The Netherlands