FedLECC: Cluster- and Loss-Guided Client Selection for Federated Learning under Non-IID Data

📅 2026-03-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses slow convergence and low accuracy in cross-device federated learning under non-IID data distributions and limited communication resources. The authors propose a lightweight client selection strategy that integrates label-distribution-based client clustering with a local-loss-guided mechanism: clients are first clustered according to their label distributions, and clusters exhibiting high local loss, together with their representative members, are prioritized for participation, improving both the informativeness and diversity of the selected clients. Under severe label skew, the proposed method improves test accuracy by up to 12%, reduces the number of communication rounds by approximately 22%, and cuts total communication overhead by as much as 50%, boosting model quality while preserving communication efficiency.
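The selection mechanism is concrete enough to sketch. The Python below is a minimal illustration of the two-stage idea described above, not the authors' implementation: the function name `select_clients`, the choice of k-means for the label-distribution clustering, and the loss-ranked round-robin draw are all assumptions made for the sketch.

```python
import numpy as np
from sklearn.cluster import KMeans

def select_clients(label_dists, local_losses, num_clusters, budget):
    """Two-stage selection sketch: cluster clients by label distribution,
    then draw participants from clusters in descending mean-loss order."""
    label_dists = np.asarray(label_dists)    # shape (n_clients, n_classes)
    local_losses = np.asarray(local_losses)  # shape (n_clients,)
    # Stage 1: group clients whose label histograms are similar.
    cluster_of = KMeans(n_clusters=num_clusters, n_init=10,
                        random_state=0).fit_predict(label_dists)
    # Stage 2: rank clusters by mean local loss (higher = less well fit).
    clusters = np.unique(cluster_of)
    ranked = sorted(clusters,
                    key=lambda c: local_losses[cluster_of == c].mean(),
                    reverse=True)
    # Within each cluster, order members by their own loss, highest first.
    queues = {}
    for c in ranked:
        members = np.where(cluster_of == c)[0]
        queues[c] = list(members[np.argsort(-local_losses[members])])
    # Round-robin over loss-ranked clusters so the picked set stays diverse.
    selected = []
    while len(selected) < budget and any(queues.values()):
        for c in ranked:
            if queues[c] and len(selected) < budget:
                selected.append(queues[c].pop(0))
    return selected
```

Drawing round-robin across loss-ranked clusters, rather than greedily taking the highest-loss clients overall, is what keeps the selected set diverse across label distributions while still favoring informative clients.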

📝 Abstract
Federated Learning (FL) enables distributed Artificial Intelligence (AI) across cloud-edge environments by allowing collaborative model training without centralizing data. In cross-device deployments, FL systems face strict communication and participation constraints, as well as strongly non-independent and identically distributed (non-IID) data that degrades convergence and model quality. Since only a subset of devices (a.k.a. clients) can participate per training round, intelligent client selection becomes a key systems challenge. This paper proposes FedLECC (Federated Learning with Enhanced Cluster Choice), a lightweight, cluster-aware, and loss-guided client selection strategy for cross-device FL. FedLECC groups clients by label-distribution similarity and prioritizes clusters and clients with higher local loss, enabling the selection of a small yet informative and diverse set of clients. Experimental results under severe label skew show that FedLECC improves test accuracy by up to 12%, while reducing communication rounds by approximately 22% and overall communication overhead by up to 50% compared to strong baselines. These results demonstrate that informed client selection improves the efficiency and scalability of FL workloads in cloud-edge systems.
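For a quick sanity check, the sketch above can be exercised on synthetic, Dirichlet-skewed label histograms; the alpha = 0.1 skew loosely mirrors the severe-label-skew setting the abstract evaluates, and the uniform losses are stand-ins for real local loss estimates.

```python
# Toy run: 100 clients, 10 classes, severe label skew via a Dirichlet prior.
rng = np.random.default_rng(0)
label_dists = rng.dirichlet(alpha=[0.1] * 10, size=100)  # skewed per-client label mix
local_losses = rng.uniform(0.5, 3.0, size=100)           # stand-in local loss estimates
print(select_clients(label_dists, local_losses, num_clusters=10, budget=10))
```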
Problem

Research questions and friction points this paper is trying to address.

Federated Learning
non-IID data
client selection
cross-device
label skew
Innovation

Methods, ideas, or system contributions that make the work stand out.

client selection
cluster-aware
loss-guided
non-IID
federated learning