OmniLearn: a Framework for Distributed Deep Learning over Heterogeneous Clusters

📅 2025-03-21
🏛️ IEEE Transactions on Parallel and Distributed Systems
🤖 AI Summary
To address stragglers and stale gradients caused by computational imbalance when training deep networks across heterogeneous environments (edge, cloud, and HPC), this paper proposes OmniLearn, an adaptive batch-size scaling framework grounded in proportional control. The framework combines runtime sensing of worker load, feedback-driven batch-size adjustment, and support for asynchronous SGD to balance computation under dynamically varying resources. Compared to baselines, it reduces training time by 14-85%, and in asynchronous settings it improves model accuracy by up to 6.9%. Its core contribution is applying classical control theory to distributed training: a closed-loop, self-adaptive regulation of per-worker batch sizes that jointly improves training efficiency and convergence quality.

📝 Abstract
Deep learning systems are optimized for clusters with homogeneous resources. However, heterogeneity is prevalent in computing infrastructure across edge, cloud and HPC. When training neural networks using stochastic gradient descent techniques on heterogeneous resources, performance degrades due to stragglers and stale updates. In this work, we develop an adaptive batch-scaling framework called OmniLearn to mitigate the effects of heterogeneity in distributed training. Our approach is inspired by proportional controllers to balance computation across heterogeneous servers, and works under varying resource availability. By dynamically adjusting worker mini-batches at runtime, OmniLearn reduces training time by 14-85%. We also investigate asynchronous training, where our techniques improve accuracy by up to 6.9%.
Problem

Research questions and friction points this paper is trying to address.

Addresses performance degradation in distributed deep learning on heterogeneous clusters
Mitigates stragglers and stale updates in SGD-based training
Improves training efficiency and accuracy across variable resources
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adaptive batch scaling for distributed training on heterogeneous resources
Proportional-controller feedback to balance computation across workers
Runtime mini-batch adjustment that reduces training time by 14-85%
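The proportional-control idea behind the batch scaling can be sketched as follows. This is a minimal illustration, not OmniLearn's actual implementation: the function name, the `gain` parameter, and the throughput-proportional rebalancing rule are assumptions. The intent is only to show how measured per-worker iteration times can drive a feedback adjustment of mini-batch sizes so that faster workers take larger batches and stragglers take smaller ones.

```python
def rebalance_batches(iter_times, batch_sizes, global_batch, gain=0.5):
    """Proportionally adjust per-worker batch sizes from measured iteration times.

    iter_times[i]  -- last measured iteration time of worker i (seconds)
    batch_sizes[i] -- current mini-batch size of worker i
    global_batch   -- total batch size to preserve across all workers
    gain           -- proportional gain (0 < gain <= 1) damping the correction
    """
    # Estimate each worker's throughput (samples/sec) from its last iteration.
    throughputs = [b / t for b, t in zip(batch_sizes, iter_times)]
    total = sum(throughputs)
    # Target: split the global batch in proportion to measured throughput,
    # so every worker would finish an iteration in roughly the same time.
    targets = [global_batch * tp / total for tp in throughputs]
    # Take a proportional step toward the target (gain < 1 avoids oscillation),
    # keeping every batch size at least 1.
    return [max(1, round(b + gain * (t - b)))
            for b, t in zip(batch_sizes, targets)]
```

For example, with two workers where one takes twice as long per iteration, a full-gain step shifts the 128-sample global batch toward the faster worker while keeping the total roughly constant. In a real controller, a gain below 1 smooths the response to noisy timing measurements.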