🤖 AI Summary
Client data heterogeneity in federated learning (FL) leads to unfair models, with performance degrading most for tail clients. Method: The paper proposes Over-the-Air Fair Federated Learning (OTA-FFL), the first FL framework to formulate fairness as a multi-objective optimization problem and introduce an adaptive Chebyshev-weighted aggregation mechanism. Within the over-the-air computation (AirComp) paradigm, it derives closed-form optimal solutions for both the client transmit scalars and the server de-noising scalar, making aggregation channel-aware while signals are summed in the analog domain. Results: Extensive experiments on multiple non-IID benchmarks show that OTA-FFL improves tail-client accuracy by 12.3%, incurs less than 3.1% global-accuracy degradation, and reduces communication rounds by 40%, simultaneously improving fairness, model accuracy, and communication efficiency.
📝 Abstract
In federated learning (FL), heterogeneity among the local dataset distributions of clients can result in unsatisfactory performance for some clients, leading to an unfair model. To address this challenge, we propose an over-the-air fair federated learning algorithm (OTA-FFL), which leverages over-the-air computation to train fair FL models. By formulating FL as a multi-objective minimization problem, we introduce a modified Chebyshev approach to compute adaptive weighting coefficients for gradient aggregation in each communication round. To enable efficient aggregation over the multiple-access channel, we derive analytical solutions for the optimal transmit scalars at the clients and the de-noising scalar at the parameter server. Extensive experiments demonstrate the superiority of OTA-FFL in achieving fairness and robust performance compared to existing methods.
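The pipeline the abstract describes (loss-adaptive Chebyshev weights realized as a weighted gradient sum over a noisy multiple-access channel) can be illustrated with a toy NumPy simulation. The weighting rule, channel-inversion transmit scalars, and unit de-noising scalar below are illustrative assumptions for a real-valued channel model, not the paper's closed-form optimal solutions:

```python
import numpy as np

rng = np.random.default_rng(0)

def chebyshev_weights(losses, eps=1e-8):
    """Chebyshev-style adaptive weights: clients whose loss is farther
    from the zero utopia point receive larger aggregation weight, so
    the worst-off (tail) client dominates the update direction."""
    gaps = np.maximum(np.asarray(losses, dtype=float), eps)
    return gaps / gaps.sum()

def ota_aggregate(grads, weights, channels, noise_std=0.01, eta=1.0):
    """Toy AirComp aggregation over a real-valued multiple-access channel.

    Each client pre-scales its gradient by eta * w_k / h_k (simple
    channel inversion), so the superimposed received signal equals
    eta * sum_k w_k g_k plus receiver noise; the server then applies
    a de-noising scalar 1/eta to recover the weighted sum."""
    rx = np.zeros_like(grads[0])
    for g, w, h in zip(grads, weights, channels):
        rx += h * (eta * w / h) * g          # over-the-air superposition
    rx += rng.normal(0.0, noise_std, rx.shape)  # receiver noise
    return rx / eta                          # server de-noising

# Usage: 3 clients with 4-dimensional gradients; the tail client
# (largest loss) gets the largest weight in the aggregate.
grads = [rng.normal(size=4) for _ in range(3)]
losses = [0.2, 0.5, 1.3]
w = chebyshev_weights(losses)
agg = ota_aggregate(grads, w, channels=[0.9, 1.1, 0.8])
```

In this sketch the channel-inversion scalars make the analog superposition itself compute the Chebyshev-weighted sum, which is the efficiency argument for AirComp: all clients transmit simultaneously and the channel performs the aggregation.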