Achieving Distributive Justice in Federated Learning via Uncertainty Quantification

📅 2025-04-22
🤖 AI Summary
In federated learning, significant client performance heterogeneity remains inadequately addressed by existing fairness frameworks, which lack systematic modeling grounded in principles of distributive justice. Method: This work formally introduces, for the first time in federated learning, four canonical theories of distributive justice from social philosophy: egalitarianism, utilitarianism, Rawls' difference principle, and desert-based justice. It unifies them into a modular, plug-and-play fairness framework; proposes a client-adaptive weighting mechanism leveraging heteroscedastic aleatoric uncertainty; derives a provable upper bound on generalization error; and implements fairness-aware model aggregation for multi-paradigm optimization. Results: Extensive evaluation across heterogeneous distribution benchmarks demonstrates that each justice paradigm achieves fairness on par with or better than state-of-the-art baselines while preserving accuracy. The implementation is publicly available.

📝 Abstract
Client-level fairness metrics for federated learning are used to ensure that all clients in a federation either: a) have similar final performance on their local data distributions (i.e., client parity), or b) obtain final performance on their local data distributions relative to their contribution to the federated learning process (i.e., contribution fairness). While a handful of works that propose either client-parity or contribution-based fairness metrics ground their definitions and decisions in social theories of equality -- such as distributive justice -- most works arbitrarily choose what notion of fairness to align with which makes it difficult for practitioners to choose which fairness metric aligns best with their fairness ethics. In this work, we propose UDJ-FL (Uncertainty-based Distributive Justice for Federated Learning), a flexible federated learning framework that can achieve multiple distributive justice-based client-level fairness metrics. Namely, by utilizing techniques inspired by fair resource allocation, in conjunction with performing aleatoric uncertainty-based client weighing, our UDJ-FL framework is able to achieve egalitarian, utilitarian, Rawls' difference principle, or desert-based client-level fairness. We empirically show the ability of UDJ-FL to achieve all four defined distributive justice-based client-level fairness metrics in addition to providing fairness equivalent to (or surpassing) other popular fair federated learning works. Further, we provide justification for why aleatoric uncertainty weighing is necessary to the construction of our UDJ-FL framework as well as derive theoretical guarantees for the generalization bounds of UDJ-FL. Our code is publicly available at https://github.com/alycia-noel/UDJ-FL.
Problem

Research questions and friction points this paper is trying to address.

Ensuring client-level fairness in federated learning via metrics grounded in distributive justice
Building a single flexible framework that can achieve multiple fairness metrics using uncertainty quantification
Prior works choose fairness notions arbitrarily, making it hard for practitioners to match a metric to their fairness ethics
Innovation

Methods, ideas, or system contributions that make the work stand out.

Aleatoric uncertainty-based client weighing for fairness
Flexible, plug-and-play framework achieving multiple distributive justice-based fairness metrics
Integration of fair resource allocation techniques
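
As a rough illustration of the aggregation idea described above, the sketch below maps per-client aleatoric uncertainty estimates to aggregation weights and averages client parameters with them. The function names, the power-law weighting, and the `beta` knob are illustrative assumptions for exposition, not the paper's exact formulation (see the UDJ-FL repository for the actual method):

```python
# Illustrative sketch (not the paper's exact method): weight clients by their
# estimated heteroscedastic aleatoric uncertainty so that clients with harder
# local distributions receive more influence during aggregation. `beta` is a
# hypothetical knob: beta = 0 recovers uniform weights (plain averaging),
# while large beta concentrates weight on the most uncertain client, in the
# spirit of Rawls' difference principle.

def udj_weights(uncertainties, beta):
    """Map per-client uncertainty estimates to normalized aggregation weights."""
    powered = [u ** beta for u in uncertainties]
    total = sum(powered)
    return [p / total for p in powered]

def aggregate(client_params, weights):
    """FedAvg-style weighted average of client parameter vectors."""
    dim = len(client_params[0])
    return [sum(w * p[i] for w, p in zip(weights, client_params))
            for i in range(dim)]

# Example: three clients, the third with the noisiest local data.
uncerts = [0.1, 0.2, 0.7]
print(udj_weights(uncerts, beta=0.0))  # uniform weights
print(udj_weights(uncerts, beta=4.0))  # heavily favors the third client
```

In this toy version, sweeping `beta` interpolates between fairness regimes; the actual framework additionally ties the weighting to fair resource allocation techniques and fairness-aware aggregation as described in the abstract.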