🤖 AI Summary
In federated learning, severe client data quantity skew critically undermines the robustness of clustered federated learning (CFL), yet existing methods lack systematic evaluation and targeted design for this challenge. This paper presents the first comprehensive empirical analysis of mainstream CFL algorithms under multi-scale quantity skew, revealing consistent performance degradation. To address this, we propose CORNFLQS—a novel framework that jointly optimizes server-side dynamic clustering and client-side adaptive local update strategies. CORNFLQS integrates model similarity measurement, loss-aware grouping, and iterative reclustering to achieve strong robustness against data quantity heterogeneity. Extensive experiments across six image classification benchmarks and 270 Non-IID configurations demonstrate that CORNFLQS achieves the best average ranking in both classification accuracy and clustering quality, significantly outperforming state-of-the-art CFL methods.
📝 Abstract
Federated Learning (FL) is a decentralized paradigm in which clients collaboratively train a global Artificial Intelligence model under the orchestration of a server, without sharing raw data, thereby preserving privacy. A key challenge in FL is Non-IID data. Quantity Skew (QS) is a particular case of Non-IID in which clients hold highly heterogeneous data volumes. Clustered Federated Learning (CFL) is an emerging variant of FL that offers a promising solution to the Non-IID problem: it improves model performance by grouping clients with similar data distributions into clusters. CFL methods generally follow one of two operating strategies. In the first, each client selects the cluster that minimizes its local training loss. In the second, the server groups clients based on the similarity of their local models. However, most CFL methods have not been systematically evaluated under QS, even though it poses significant challenges to them. In this paper, we present two main contributions. The first is an evaluation of state-of-the-art CFL algorithms under various Non-IID settings, applying multiple QS scenarios to assess their robustness. The second is a novel iterative CFL algorithm, named CORNFLQS, which coordinates both operating strategies of CFL. Our approach is robust across the different variations of QS settings. We conducted extensive experiments on six image classification datasets, resulting in 270 Non-IID configurations. The results show that CORNFLQS achieves the highest average ranking in both accuracy and clustering quality, as well as strong robustness to QS perturbations. Overall, our approach outperforms current CFL algorithms.
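The two CFL operating strategies described above can be sketched in a few lines. This is a minimal, hypothetical illustration, not the paper's CORNFLQS procedure: the function names, the cosine-similarity measure, and the tiny k-means grouping are all assumptions chosen for brevity.

```python
import numpy as np

def select_cluster_by_loss(local_losses):
    # Strategy 1 (client side): the client evaluates each cluster model
    # on its own data and joins the cluster with the lowest local loss.
    return int(np.argmin(local_losses))

def cluster_by_similarity(updates, num_clusters, iters=10):
    # Strategy 2 (server side): the server groups clients whose model
    # updates point in similar directions, here via a tiny k-means on
    # L2-normalized update vectors (i.e., cosine-similarity grouping).
    U = updates / np.linalg.norm(updates, axis=1, keepdims=True)
    # Deterministic init: evenly spaced clients serve as initial centers.
    idx = np.linspace(0, len(U) - 1, num_clusters).astype(int)
    centers = U[idx].copy()
    for _ in range(iters):
        # Assign each client to its most similar center.
        labels = np.argmax(U @ centers.T, axis=1)
        for k in range(num_clusters):
            members = U[labels == k]
            if len(members):
                c = members.mean(axis=0)
                centers[k] = c / np.linalg.norm(c)
    return labels
```

In an iterative scheme like the one the abstract describes, the server-side grouping and the client-side loss-based selection would alternate across rounds, refining the clusters as local models evolve.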