Heterogeneity-Aware Client Sampling: A Unified Solution for Consistent Federated Learning

📅 2025-05-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
In federated learning, dual heterogeneity, spanning client communication capabilities and computational resources, induces misaligned optimization dynamics and objective inconsistency, causing the global model to converge to spurious stationary points that deviate from the true optimum. This paper introduces the first unified theoretical framework analyzing communication-computation heterogeneity and proposes FedACS, a general client sampling scheme. FedACS employs a heterogeneity-aware dynamic sampling mechanism that rigorously eliminates all forms of objective inconsistency without modifying local solvers. We prove that FedACS achieves a convergence rate of $O(1/\sqrt{R})$ under standard assumptions. Extensive experiments across diverse heterogeneous settings demonstrate that FedACS improves model accuracy by 4.3%–36%, reduces communication overhead by 22%–89%, and decreases per-round computational load by 14%–105%, significantly outperforming existing baselines.
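The summary gives no pseudocode, so the sketch below is only a rough illustration of the general idea behind a heterogeneity-aware sampler that stays consistent with the true objective: clients are drawn with probabilities shaped by their current capacities, and their updates are reweighted by inverse sampling probability so the expected aggregate still matches the data-weighted global objective. The names `capacity_scores`, `data_sizes`, and the proportional sampling rule are illustrative assumptions, not FedACS's actual mechanism.

```python
import numpy as np

def heterogeneity_aware_round(updates, data_sizes, capacity_scores, m, rng=None):
    """One illustrative sampling-and-aggregation round (a sketch, NOT the FedACS algorithm).

    updates:         per-client model deltas for this round (list of np.ndarray)
    data_sizes:      local dataset sizes |D_i|, defining the true objective weights p_i
    capacity_scores: hypothetical per-round communication/computation capacities q_i
    m:               number of client draws in this round
    """
    rng = np.random.default_rng() if rng is None else rng
    n = len(updates)

    # Weights of the true global objective: p_i proportional to local data size.
    p = np.asarray(data_sizes, dtype=float)
    p = p / p.sum()

    # Assumed sampling distribution: favour clients that are currently cheap to involve.
    q = np.asarray(capacity_scores, dtype=float)
    q = q / q.sum()

    # Draw m clients (with replacement) from the capacity-aware distribution.
    chosen = rng.choice(n, size=m, replace=True, p=q)

    # Inverse-probability weighting keeps the aggregate unbiased for sum_i p_i * update_i,
    # which is the consistency property a heterogeneity-aware sampler needs.
    aggregate = sum((p[i] / (m * q[i])) * updates[i] for i in chosen)
    return aggregate
```

In expectation the returned aggregate equals $\sum_i p_i\,\mathrm{update}_i$, i.e., the update of the data-weighted global objective; FedACS's actual dynamic sampling rule and its guarantees are given in the paper.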

📝 Abstract
Federated learning (FL) commonly involves clients with diverse communication and computational capabilities. Such heterogeneity can significantly distort the optimization dynamics and lead to objective inconsistency, where the global model converges to an incorrect stationary point potentially far from the pursued optimum. Despite its critical impact, the joint effect of communication and computation heterogeneity has remained largely unexplored, due to the intrinsic complexity of their interaction. In this paper, we reveal the fundamentally distinct mechanisms through which heterogeneous communication and computation drive inconsistency in FL. To the best of our knowledge, this is the first unified theoretical analysis of general heterogeneous FL, offering a principled understanding of how these two forms of heterogeneity jointly distort the optimization trajectory under arbitrary choices of local solvers. Motivated by these insights, we propose Federated Heterogeneity-Aware Client Sampling, FedACS, a universal method to eliminate all types of objective inconsistency. We theoretically prove that FedACS converges to the correct optimum at a rate of $O(1/\sqrt{R})$, even in dynamic heterogeneous environments. Extensive experiments across multiple datasets show that FedACS outperforms state-of-the-art and category-specific baselines by 4.3%-36%, while reducing communication costs by 22%-89% and computation loads by 14%-105%, respectively.
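To make "objective inconsistency" concrete, a standard formulation from the FL literature is sketched below; the surrogate weights are illustrative and not the paper's exact characterization. The intended global objective is a data-weighted average of client losses, while heterogeneous participation and unequal local work implicitly minimize a differently weighted surrogate.

```latex
% Intended global objective (data-proportional weights)
F(x) \;=\; \sum_{i=1}^{N} p_i \, F_i(x), \qquad p_i = \frac{|D_i|}{\sum_{j} |D_j|}.

% Surrogate implicitly minimized under heterogeneous participation and local work
\tilde{F}(x) \;=\; \sum_{i=1}^{N} \tilde{w}_i \, F_i(x), \qquad \tilde{w}_i \neq p_i \ \text{in general},

% so the iterates approach a stationary point of \tilde{F} rather than of F.
```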
Problem

Research questions and friction points this paper is trying to address.

Handling clients with heterogeneous communication and computational capabilities in federated learning
Analyzing the joint impact of communication and computation heterogeneity on objective consistency
Designing a client sampling scheme (FedACS) that eliminates objective inconsistency without modifying local solvers
Innovation

Methods, ideas, or system contributions that make the work stand out.

Heterogeneity-aware client sampling for FL
First unified theoretical analysis of communication and computation heterogeneity
FedACS provably converges to the correct optimum at rate $O(1/\sqrt{R})$ while reducing communication and computation costs