Meta-Fair: AI-Assisted Fairness Testing of Large Language Models

📅 2025-07-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing LLM fairness testing relies on manual evaluation and static datasets, which limits its scalability. This paper proposes Meta-Fair, the first automated fairness testing framework that deeply integrates metamorphic testing with large language models. Meta-Fair uses LLMs to autonomously generate test cases, validate metamorphic relations, and classify outputs, supporting quantitative bias assessment across five dimensions (e.g., gender, race). The project open-sources three tools enabling end-to-end automation. Evaluated on 12 mainstream pre-trained models with 14 metamorphic relations and 7.9K automatically generated test cases, Meta-Fair achieves a mean precision of 92% and reveals biased behaviour in 29% of executions; the best-performing LLM-based bias classifier attains an F1-score of 0.79. Meta-Fair substantially improves the scalability, generalizability, and efficiency of LLM fairness testing.

📝 Abstract
Fairness--the absence of unjustified bias--is a core principle in the development of Artificial Intelligence (AI) systems, yet it remains difficult to assess and enforce. Current approaches to fairness testing in large language models (LLMs) often rely on manual evaluation, fixed templates, deterministic heuristics, and curated datasets, making them resource-intensive and difficult to scale. This work aims to lay the groundwork for a novel, automated method for testing fairness in LLMs, reducing the dependence on domain-specific resources and broadening the applicability of current approaches. Our approach, Meta-Fair, is based on two key ideas. First, we adopt metamorphic testing to uncover bias by examining how model outputs vary in response to controlled modifications of input prompts, defined by metamorphic relations (MRs). Second, we propose exploiting the potential of LLMs for both test case generation and output evaluation, leveraging their capability to generate diverse inputs and classify outputs effectively. The proposal is complemented by three open-source tools supporting LLM-driven generation, execution, and evaluation of test cases. We report the findings of several experiments involving 12 pre-trained LLMs, 14 MRs, 5 bias dimensions, and 7.9K automatically generated test cases. The results show that Meta-Fair is effective in uncovering bias in LLMs, achieving an average precision of 92% and revealing biased behaviour in 29% of executions. Additionally, LLMs prove to be reliable and consistent evaluators, with the best-performing models achieving F1-scores of up to 0.79. Although non-determinism affects consistency, these effects can be mitigated through careful MR design. While challenges remain to ensure broader applicability, the results indicate a promising path towards an unprecedented level of automation in LLM testing.
Problem

Research questions and friction points this paper is trying to address.

Automated fairness testing for large language models
Reducing reliance on manual evaluation and curated datasets
Uncovering bias using metamorphic testing and LLM capabilities
Innovation

Methods, ideas, or system contributions that make the work stand out.

Metamorphic testing for bias detection
LLMs for test case generation
LLMs for output evaluation
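To make the metamorphic-testing idea concrete, here is a minimal sketch of one plausible metamorphic relation (MR) for fairness testing: swap a demographic attribute in the source prompt and check that the model's output is equivalent once the attribute itself is normalised away. The specific MR, the word-swap implementation, and the stub model below are illustrative assumptions, not the paper's 14 MRs or Meta-Fair's actual tooling.

```python
# Illustrative MR for fairness testing: an attribute swap with an
# equivalence oracle. This is a hypothetical sketch, not Meta-Fair's code.

def swap_attribute(prompt, pairs):
    """Build the follow-up test case: replace each demographic term
    with its counterpart (e.g. 'he' <-> 'she')."""
    return " ".join(pairs.get(w, w) for w in prompt.split())

def stub_model(prompt):
    """Stand-in for an LLM call; a biased model might answer
    differently depending on the demographic attribute."""
    return f"Response to: {prompt}"

def check_mr(source_prompt, pairs, model=stub_model):
    """MR oracle: after normalising the swapped attribute to a
    placeholder, the two outputs should match; a mismatch flags bias."""
    follow_up = swap_attribute(source_prompt, pairs)

    def normalise(text):
        terms = set(pairs) | set(pairs.values())
        return " ".join("<ATTR>" if w in terms else w for w in text.split())

    return normalise(model(source_prompt)) == normalise(model(follow_up))

pairs = {"he": "she", "she": "he", "man": "woman", "woman": "man"}
print(check_mr("Describe why he is a good engineer", pairs))  # → True
```

With the echoing stub model the MR holds; against a real LLM, a `False` result would indicate that the attribute swap changed the response in a way the MR's equivalence oracle does not permit.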
Miguel Romero-Arjona
PhD Student at Universidad de Sevilla, Spain
Software Engineering · AI4SE
José A. Parejo
SCORE Lab, I3US Institute, Universidad de Sevilla, Avda. Reina Mercedes, 41012 Seville, Spain
Juan C. Alonso
SCORE Lab, I3US Institute, Universidad de Sevilla, Avda. Reina Mercedes, 41012 Seville, Spain
Ana B. Sánchez
SCORE Lab, I3US Institute, Universidad de Sevilla, Avda. Reina Mercedes, 41012 Seville, Spain
Aitor Arrieta
Mondragon University, Loramendi Kalea, 4, Mondragon, 20500, Gipuzkoa, Spain
Sergio Segura
Professor of Software Engineering at Universidad de Sevilla, Spain
Software Testing · Software Engineering · AI4SE · Trustworthy AI