🤖 AI Summary
Federated learning (FL) in large-scale, geographically distributed settings suffers from network latency, clock asynchrony, and inconsistent freshness of client updates, leading to unstable model convergence and misaligned contribution assessment. Existing methods do not explicitly quantify update staleness; to address this, the paper proposes the first FL framework with explicit, NTP-based temporal semantics. It attaches high-precision global timestamps to client updates, formalizes a numerical staleness metric, and introduces a temporally aware weighted aggregation mechanism, thereby overcoming the temporal blindness of conventional round-driven paradigms. Extensive experiments on a cross-regional distributed FL testbed demonstrate that the approach significantly improves model accuracy and information freshness while ensuring temporal consistency in global model evolution, consistently outperforming time-agnostic baselines across all evaluated metrics.
📝 Abstract
As Federated Learning (FL) expands to larger and more distributed environments, training consistency is challenged by network-induced delays, clock asynchrony, and variability in client updates. These factors can misalign client contributions, undermining model reliability and convergence. Existing methods such as staleness-aware aggregation and model versioning handle lagging updates heuristically, yet lack mechanisms to quantify staleness, especially in latency-sensitive and cross-regional deployments. In light of these considerations, we introduce *SyncFed*, a time-aware FL framework that employs explicit synchronization and timestamping to establish a common temporal reference across the system. Staleness is quantified numerically from timestamps exchanged under the Network Time Protocol (NTP), enabling the server to reason about the relative freshness of client updates and apply temporally informed weighting during aggregation. Our empirical evaluation on a geographically distributed testbed shows that, under *SyncFed*, the global model evolves within a stable temporal context, yielding improved accuracy and information freshness compared to round-based baselines devoid of temporal semantics.
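To make the pipeline described above concrete, the sketch below shows how NTP-derived timestamps could drive staleness-weighted aggregation. This is a minimal illustration, not the paper's implementation: the exponential decay form, the `decay` rate, and the function names are all assumptions introduced here for clarity.

```python
import math

def staleness(update_ts: float, server_ts: float) -> float:
    """Staleness in seconds: gap between an update's NTP-synchronized
    timestamp and the server's aggregation time (clamped at zero)."""
    return max(server_ts - update_ts, 0.0)

def temporal_weight(stale_s: float, decay: float = 0.1) -> float:
    """Hypothetical weighting: exponential decay in staleness, so a
    perfectly fresh update gets weight 1.0 and older ones fade out."""
    return math.exp(-decay * stale_s)

def aggregate(updates, server_ts, decay=0.1):
    """Freshness-weighted average of client model vectors.

    `updates` is a list of (timestamp, parameter_vector) pairs.
    """
    weights = [temporal_weight(staleness(ts, server_ts), decay)
               for ts, _ in updates]
    total = sum(weights)
    dim = len(updates[0][1])
    merged = [0.0] * dim
    for w, (_, vec) in zip(weights, updates):
        for i, v in enumerate(vec):
            merged[i] += (w / total) * v
    return merged
```

For example, with one fresh update and one that is 10 seconds stale, the fresh client's parameters dominate the aggregate in proportion to the decay rate.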