Federated fairness-aware classification under differential privacy

📅 2026-03-25
📈 Citations: 0
Influential citations: 0
🤖 AI Summary
This work addresses the challenge of simultaneously achieving differential privacy, algorithmic fairness, and high classification accuracy in federated learning. The authors propose a two-stage algorithm, FDP-Fair, which enables privacy-preserving fair classification under demographic parity constraints in multi-server federated settings, along with a lightweight variant, CDP-Fair, tailored for single-server scenarios. A key contribution is the first systematic decomposition of the excess risk in private fair classification into four distinct sources: inherent classification cost, privacy-induced cost, non-private fairness cost, and private fairness cost, accompanied by rigorous theoretical guarantees. Empirical evaluations on both synthetic and real-world datasets demonstrate that the proposed methods effectively balance privacy, fairness, and accuracy, exhibiting strong practical utility.
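The four-way excess risk decomposition highlighted above can be sketched informally as follows; the symbols are illustrative placeholders rather than the paper's own notation:

```latex
\underbrace{\mathcal{E}(\hat{g})}_{\text{private fairness-aware excess risk}}
  \;\lesssim\;
  \underbrace{\mathcal{E}_{\mathrm{cls}}}_{\text{intrinsic cost of classification}}
  + \underbrace{\mathcal{E}_{\mathrm{priv}}}_{\text{cost of private classification}}
  + \underbrace{\mathcal{E}_{\mathrm{fair}}}_{\text{non-private cost of fairness}}
  + \underbrace{\mathcal{E}_{\mathrm{priv\text{-}fair}}}_{\text{private cost of fairness}}
```

Each term isolates one source of error, so the bound separates what is unavoidable for any classifier from what is paid for privacy, for fairness, and for enforcing fairness privately.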

📝 Abstract
Privacy and algorithmic fairness have become two central issues in modern machine learning. Although each has separately emerged as a rapidly growing research area, their joint effect remains comparatively under-explored. In this paper, we systematically study the joint impact of differential privacy and fairness on classification in a federated setting, where data are distributed across multiple servers. Targeting demographic disparity constrained classification under federated differential privacy, we propose a two-step algorithm, namely FDP-Fair. In the special case where there is only one server, we further propose a simple yet powerful algorithm, namely CDP-Fair, serving as a computationally-lightweight alternative. Under mild structural assumptions, theoretical guarantees on privacy, fairness and excess risk control are established. In particular, we disentangle the source of the private fairness-aware excess risk into a) intrinsic cost of classification, b) cost of private classification, c) non-private cost of fairness and d) private cost of fairness. Our theoretical findings are complemented by extensive numerical experiments on both synthetic and real datasets, highlighting the practicality of our designed algorithms.
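To make the two constraints in the abstract concrete, the snippet below sketches (a) a demographic parity gap, the fairness quantity being constrained, and (b) a differentially private release of such a group statistic via the standard Gaussian mechanism. This is an illustrative sketch only: the function names, the single-statistic setup, and the sensitivity value are assumptions for the example, not the paper's FDP-Fair or CDP-Fair algorithms.

```python
import numpy as np

def demographic_parity_gap(y_pred, sensitive):
    """Absolute difference in positive-prediction rates between two groups."""
    y_pred = np.asarray(y_pred)
    sensitive = np.asarray(sensitive)
    rate_0 = y_pred[sensitive == 0].mean()
    rate_1 = y_pred[sensitive == 1].mean()
    return abs(rate_0 - rate_1)

def gaussian_mechanism(value, sensitivity, epsilon, delta, rng):
    """Release `value` under (epsilon, delta)-DP via the Gaussian mechanism."""
    sigma = sensitivity * np.sqrt(2 * np.log(1.25 / delta)) / epsilon
    return value + rng.normal(0.0, sigma)

# Toy example: a server computes its local disparity, then privatizes it.
rng = np.random.default_rng(0)
y_pred = np.array([1, 0, 1, 1, 0, 0, 1, 0])
s = np.array([0, 0, 0, 0, 1, 1, 1, 1])

gap = demographic_parity_gap(y_pred, s)  # 0.5 here: rates 0.75 vs 0.25
# Sensitivity 1/4: with 4 records per group, changing one record
# moves a group rate (and hence the gap) by at most 1/4.
noisy_gap = gaussian_mechanism(gap, sensitivity=0.25,
                               epsilon=1.0, delta=1e-5, rng=rng)
```

In a federated setting, each server would release only such privatized statistics, and the coordinator would aggregate the noisy values rather than the raw data.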
Problem

Research questions and friction points this paper is trying to address.

federated learning
differential privacy
algorithmic fairness
demographic disparity
fair classification
Innovation

Methods, ideas, or system contributions that make the work stand out.

federated learning
differential privacy
algorithmic fairness
fair classification
excess risk decomposition
Gengyu Xue
Department of Statistics, University of Warwick
Yi Yu
Department of Statistics, University of Warwick