Quantifying Fairness in LLMs Beyond Tokens: A Semantic and Statistical Perspective

📅 2025-06-23
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
Existing evaluation methods for bias in long-text generation by large language models (LLMs) often overlook output variability and deep semantic biases. Method: We propose FiSCo, the first framework to integrate semantic entailment analysis with statistical hypothesis testing at the claim level, formally defining *group counterfactual fairness* for fine-grained semantic fairness quantification. FiSCo decomposes long responses into comparable claim units via semantic decomposition, entailment judgment, and inter-/intra-group similarity modeling, thereby mitigating interference from output stochasticity. Contribution/Results: Experiments on synthetic and human-annotated datasets demonstrate that FiSCo reliably detects latent biases imperceptible to surface-level lexical analysis, outperforming state-of-the-art fairness metrics in both robustness and semantic sensitivity.

๐Ÿ“ Abstract
Large Language Models (LLMs) often generate responses with inherent biases, undermining their reliability in real-world applications. Existing evaluation methods often overlook biases in long-form responses and the intrinsic variability of LLM outputs. To address these challenges, we propose FiSCo (Fine-grained Semantic Computation), a novel statistical framework to evaluate group-level fairness in LLMs by detecting subtle semantic differences in long-form responses across demographic groups. Unlike prior work focusing on sentiment or token-level comparisons, FiSCo goes beyond surface-level analysis by operating at the claim level, leveraging entailment checks to assess the consistency of meaning across responses. We decompose model outputs into semantically distinct claims and apply statistical hypothesis testing to compare inter- and intra-group similarities, enabling robust detection of subtle biases. We formalize a new group counterfactual fairness definition and validate FiSCo on both synthetic and human-annotated datasets spanning gender, race, and age. Experiments show that FiSCo more reliably identifies nuanced biases while reducing the impact of stochastic LLM variability, outperforming various evaluation metrics.
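The claim-level comparison described in the abstract can be sketched as follows. This is a toy illustration, not the paper's implementation: the real framework uses an LLM for semantic decomposition and a trained entailment model for claim judgments, both of which are replaced here with simple heuristic stand-ins.

```python
# Toy sketch of claim-level response comparison (assumptions: the actual
# FiSCo pipeline uses an LLM for decomposition and an entailment model
# for claim judgments; the functions below are heuristic stand-ins).

def decompose_into_claims(response: str) -> list[str]:
    # Stand-in for semantic decomposition: split on sentence boundaries.
    return [s.strip() for s in response.split(".") if s.strip()]

def entails(claim_a: str, claim_b: str) -> bool:
    # Stand-in for an entailment model: token-overlap (Jaccard) heuristic.
    a, b = set(claim_a.lower().split()), set(claim_b.lower().split())
    return len(a & b) / max(len(a | b), 1) > 0.5

def claim_similarity(resp_x: str, resp_y: str) -> float:
    # Fraction of claims in one response supported by some claim in the
    # other, averaged over both directions (a symmetric score in [0, 1]).
    cx, cy = decompose_into_claims(resp_x), decompose_into_claims(resp_y)
    if not cx or not cy:
        return 0.0
    fwd = sum(any(entails(c, d) for d in cy) for c in cx) / len(cx)
    bwd = sum(any(entails(d, c) for c in cx) for d in cy) / len(cy)
    return (fwd + bwd) / 2
```

Scores like these, computed over many response pairs within and across demographic groups, feed the statistical test described next.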
Problem

Research questions and friction points this paper is trying to address.

Detect subtle semantic biases in LLM long-form responses
Evaluate group-level fairness beyond token-level analysis
Address stochastic variability in LLM outputs for bias detection
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fine-grained semantic computation for bias detection
Claim-level entailment checks for meaning consistency
Statistical hypothesis testing for group fairness
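The hypothesis-testing step above can be illustrated with a minimal sketch: compare intra-group similarity scores (responses for the same demographic group) against inter-group scores (responses across groups) with a two-sample test. The similarity values and the use of Welch's t-statistic here are illustrative assumptions, not the paper's exact test.

```python
import math
from statistics import mean, variance

def welch_t(sample_a: list[float], sample_b: list[float]) -> float:
    # Welch's t-statistic: two-sample comparison without assuming
    # equal variances between the groups.
    na, nb = len(sample_a), len(sample_b)
    va, vb = variance(sample_a), variance(sample_b)  # sample variances
    se = math.sqrt(va / na + vb / nb)
    return (mean(sample_a) - mean(sample_b)) / se

# Hypothetical similarity scores (stand-ins for real measurements):
# intra = pairwise similarities within one demographic group,
# inter = pairwise similarities across demographic groups.
intra = [0.82, 0.79, 0.85, 0.81, 0.78]
inter = [0.61, 0.58, 0.66, 0.60, 0.63]

t = welch_t(intra, inter)
# A large positive t indicates inter-group similarity is systematically
# lower than intra-group similarity, i.e. a potential group-level bias.
```

In practice the statistic would be converted to a p-value against the null hypothesis that intra- and inter-group similarities share the same mean; rejecting the null signals a fairness violation under the group counterfactual fairness definition.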