Adaptive Deadline and Batch Layered Synchronized Federated Learning

📅 2025-05-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
In federated learning over heterogeneous edge devices, stragglers impose severe synchronization latency bottlenecks. Method: This paper proposes a dynamic co-optimization framework, the first of its kind, that jointly optimizes the per-round deadline and client-tiered batch sizes. It formulates a time-constrained L2 convergence-distance minimization problem, models client computation times as exponential random variables, and employs tiered parameter aggregation. Theoretical analysis proves that the updates are unbiased and the gradient variance is bounded. Contribution/Results: Unlike prior approaches that treat round duration and local workload as static parameters, the method enables adaptive scheduling under strict time constraints. Experiments in heterogeneous edge environments demonstrate significant improvements in convergence speed and final model accuracy over baselines, including fixed deadlines, client selection, and static tiered aggregation.

📝 Abstract
Federated learning (FL) enables collaborative model training across distributed edge devices while preserving data privacy, and typically operates in a round-based synchronous manner. However, synchronous FL suffers from latency bottlenecks due to device heterogeneity, where slower clients (stragglers) delay or degrade global updates. Prior solutions, such as fixed deadlines, client selection, and layer-wise partial aggregation, alleviate the effect of stragglers but treat round timing and local workload as static parameters, limiting their effectiveness under strict time constraints. We propose ADEL-FL, a novel framework that jointly optimizes per-round deadlines and user-specific batch sizes for layer-wise aggregation. Our approach formulates a constrained optimization problem that minimizes the expected L2 distance to the global optimum under constraints on total training time and the number of global rounds. We provide a convergence analysis under exponential compute models and prove that ADEL-FL yields unbiased updates with bounded variance. Extensive experiments demonstrate that ADEL-FL outperforms alternative methods in both convergence rate and final accuracy under heterogeneous conditions.
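To make the deadline/batch-size coupling concrete, here is a minimal sketch, not the paper's actual algorithm: assuming each client i's time to process a batch of b samples is exponentially distributed with mean b * mu_i (mu_i being the client's mean per-sample compute time, a hypothetical parameter), the server can pick the largest batch size for which the client finishes within the round deadline with at least a target probability.

```python
import math

def max_batch_size(mu_i, deadline, target=0.9, b_max=512):
    """Largest batch size b such that a client with mean per-sample
    time mu_i finishes within `deadline` with probability >= target,
    under an exponential compute-time model."""
    best = 1
    for b in range(1, b_max + 1):
        # Time for b samples ~ Exp(mean = b * mu_i), so
        # P(finish <= deadline) = 1 - exp(-deadline / (b * mu_i)).
        p_finish = 1.0 - math.exp(-deadline / (b * mu_i))
        if p_finish >= target:
            best = b  # p_finish decreases in b, so this tracks the max
    return best
```

Under this model a slower client (larger mu_i) is assigned a smaller batch, which is the qualitative behavior the abstract describes; the paper itself solves a joint problem over the deadline and all clients' batch sizes rather than this per-client heuristic.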
Problem

Research questions and friction points this paper is trying to address.

Optimizes deadlines and batch sizes for straggler mitigation
Minimizes L2 distance to global optimum under time constraints
Enhances convergence and accuracy in heterogeneous federated learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adaptive optimization of per-round deadlines
User-specific batch size adjustment
Layer-wise aggregation for stragglers
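The layer-wise aggregation idea can be sketched as follows: clients that hit the deadline upload all layers, while stragglers upload only the layers they finished, and the server averages each layer over whichever clients reported it. This is an illustrative sketch of layer-wise partial aggregation in general, not ADEL-FL's exact tiered scheme; layer names and the list-of-floats parameter encoding are assumptions for the example.

```python
def layerwise_aggregate(client_updates):
    """Average each layer over the clients that reported it.
    client_updates: list of dicts {layer_name: [float, ...]};
    a straggler's dict may contain only the layers it completed."""
    sums, counts = {}, {}
    for update in client_updates:
        for layer, params in update.items():
            if layer not in sums:
                sums[layer] = [0.0] * len(params)
                counts[layer] = 0
            sums[layer] = [s + p for s, p in zip(sums[layer], params)]
            counts[layer] += 1
    return {layer: [s / counts[layer] for s in sums[layer]]
            for layer in sums}
```

For example, if one client reports layers l1 and l2 but a straggler reports only l1, then l1 is averaged over both clients while l2 keeps the single reporter's values.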