Fairness Regularization in Federated Learning

📅 2025-08-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
In federated learning, data heterogeneity causes imbalanced client contributions to the global model, resulting in significant performance disparities and posing challenges to fairness guarantees. This paper focuses on performance-balanced fairness and proposes FairGrad and FairGrad*, two gradient-variance regularization methods that explicitly constrain the dispersion of local gradient distributions during client-side updates to mitigate individual loss bias. Theoretical analysis establishes an intrinsic unification between the proposed methods and mainstream fair FL algorithms (e.g., q-Fair FL, AFL). Extensive experiments on heterogeneous benchmarks—including CIFAR-10/100-LT and FEMNIST—demonstrate that our approach simultaneously improves global accuracy and substantially reduces inter-client performance standard deviation (average reduction of 32.7%), consistently outperforming existing fairness-aware FL algorithms.
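The client-side regularization described above can be illustrated with a minimal sketch. This is not the paper's implementation: it uses a toy linear model and penalizes the variance of per-sample losses as a first-order proxy for constraining the dispersion of per-sample gradients (the function name `fair_client_update`, the penalty weight `lam`, and all hyperparameters are illustrative assumptions).

```python
import numpy as np

def fair_client_update(w, X, y, lam=0.1, lr=0.05, steps=200):
    """One client's local update with a dispersion penalty.

    Minimizes  mean_i(loss_i) + lam * Var_i(loss_i)  on the client's
    local data (X, y), a hedged proxy for gradient-variance
    regularization on a linear model with squared loss.
    """
    n = len(y)
    for _ in range(steps):
        r = X @ w - y                       # residuals, shape (n,)
        losses = 0.5 * r ** 2               # per-sample losses
        grads = X * r[:, None]              # per-sample gradients, shape (n, d)
        mean_grad = grads.mean(axis=0)
        # d/dw Var(loss) = (2/n) * sum_i (loss_i - mean_loss) * grad_i
        centered = losses - losses.mean()
        var_grad = (2.0 / n) * (centered[:, None] * grads).sum(axis=0)
        w = w - lr * (mean_grad + lam * var_grad)
    return w
```

The `lam * var_grad` term pushes the update to shrink the spread of per-sample losses, not just their mean, which is the mechanism the summary attributes to FairGrad-style regularizers.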

📝 Abstract
Federated Learning (FL) has emerged as a vital paradigm in modern machine learning that enables collaborative training across decentralized data sources without exchanging raw data. This approach not only addresses privacy concerns but also allows access to overall substantially larger and potentially more diverse datasets, without the need for centralized storage or hardware resources. However, heterogeneity in client data may cause certain clients to have disproportionate impacts on the global model, leading to disparities in the clients' performances. Fairness, therefore, becomes a crucial concern in FL and can be addressed in various ways. However, the effectiveness of existing fairness-aware methods, particularly in heterogeneous data settings, remains unclear, and the relationships between different approaches are not well understood. In this work, we focus on performance equitable fairness, which aims to minimize differences in performance across clients. We restrict our study to fairness-aware methods that explicitly regularize client losses, evaluating both existing and newly proposed approaches. We identify and theoretically explain connections between the investigated fairness methods, and empirically show that FairGrad (approximate) and FairGrad* (exact) (two variants of a gradient variance regularization method introduced here for performance equitable fairness) improve both fairness and overall model performance in heterogeneous data settings.
Problem

Research questions and friction points this paper is trying to address.

Address performance disparities across clients in Federated Learning
Evaluate fairness-aware regularization methods under heterogeneous data settings
Propose FairGrad and FairGrad* to improve both fairness and model performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fairness regularization framework for federated learning
FairGrad and FairGrad*: gradient-variance regularization methods
Simultaneous gains in client fairness and overall model performance