Accelerating Deep Neural Network Training via Distributed Hybrid Order Optimization

📅 2025-05-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address high memory overhead and slow convergence in distributed DNN training under resource-constrained settings, this paper proposes the Distributed Hybrid-Order Optimizer (DHO²). DHO² introduces the first distributed hybrid-order optimization framework, enabling device-level parallel computation of sparse curvature information and integrating gradient-based updates with partial curvature-driven corrections. Unlike conventional distributed first-order or second-order methods, DHO² achieves a near-linear reduction in per-device memory consumption as the number of devices increases, while delivering 1.4×–2.1× end-to-end training speedup. This represents a significant breakthrough in overcoming the memory bottleneck inherent to second-order optimization. The core innovations lie in (i) distributed sparse approximation of curvature matrices and (ii) an asynchronous hybrid-order update mechanism that jointly leverages gradient and curvature signals across devices.
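The hybrid-order idea the summary describes (gradient-based updates plus curvature-driven corrections in a low-dimensional subspace, as in FOSI) can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the function name `hybrid_order_step`, the full eigendecomposition (standing in for cheaper iterative methods such as Lanczos), and the toy quadratic are all assumptions made for clarity.

```python
import numpy as np

def hybrid_order_step(params, grad, hess, lr=0.1, k=1):
    """One hybrid-order update: a Newton-style step in the top-k curvature
    subspace, plain gradient descent in the orthogonal complement.
    Illustrative sketch only -- not the DHO^2 algorithm itself."""
    # Top-k eigenpairs of the symmetric Hessian give the sharpest-curvature
    # directions; real hybrid-order optimizers estimate these iteratively.
    vals, vecs = np.linalg.eigh(hess)
    idx = np.argsort(np.abs(vals))[::-1][:k]
    V, lam = vecs[:, idx], vals[idx]
    g_sub = V.T @ grad                    # gradient in the curvature subspace
    newton = V @ (g_sub / np.abs(lam))    # curvature-scaled (Newton) step
    g_rest = grad - V @ g_sub             # residual first-order component
    return params - newton - lr * g_rest

# Toy ill-conditioned quadratic f(x) = 0.5 x^T H x, minimum at the origin:
# the Newton step removes the stiff direction in one iteration, while the
# gradient step handles the remaining well-conditioned direction.
H = np.diag([100.0, 1.0])
x = np.array([1.0, 1.0])
for _ in range(20):
    x = hybrid_order_step(x, H @ x, H, lr=0.1, k=1)
```

On this toy problem the stiff coordinate (curvature 100) is eliminated by the first curvature-corrected step, while plain gradient descent with the same learning rate would oscillate on it, which is the intuition behind combining the two signals.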

📝 Abstract
Scaling deep neural network (DNN) training to more devices can reduce time-to-solution. However, it is impractical for users with limited computing resources. FOSI, as a hybrid order optimizer, converges faster than conventional optimizers by taking advantage of both gradient information and curvature information when updating the DNN model. Therefore, it provides a new chance for accelerating DNN training in the resource-constrained setting. In this paper, we explore its distributed design, namely DHO$_2$, including distributed calculation of curvature information and model update with partial curvature information to accelerate DNN training with a low memory burden. To further reduce the training time, we design a novel strategy to parallelize the calculation of curvature information and the model update on different devices. Experimentally, our distributed design can achieve an approximate linear reduction of memory burden on each device with the increase of the device number. Meanwhile, it achieves $1.4\times\sim 2.1\times$ speedup in the total training time compared with other distributed designs based on conventional first- and second-order optimizers.
Problem

Research questions and friction points this paper is trying to address.

Accelerating DNN training with limited computing resources
Distributed hybrid order optimization for faster convergence
Reducing memory burden and training time in distributed settings
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hybrid optimizer combines gradient and curvature information
Distributed design reduces memory burden per device
Parallel strategy accelerates curvature calculation and updates
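The per-device memory claim above rests on sharding the curvature computation across devices. A minimal simulation of that idea, under assumptions of my own (row-wise splitting, the helper names `shard_curvature` and `distributed_hvp`, and explicit matrices in place of the paper's sparse approximations):

```python
import numpy as np

def shard_curvature(hess, num_devices):
    """Split the curvature matrix row-wise so each simulated device stores
    only its shard; per-device memory shrinks ~linearly in device count."""
    return np.array_split(hess, num_devices, axis=0)

def distributed_hvp(shards, v):
    """Each device multiplies its shard by v; concatenating the partial
    results reproduces the full Hessian-vector product H @ v."""
    return np.concatenate([s @ v for s in shards])

n, devices = 8, 4
H = np.arange(n * n, dtype=float).reshape(n, n)
v = np.ones(n)
shards = shard_curvature(H, devices)

# The sharded product matches the monolithic one...
assert np.allclose(distributed_hvp(shards, v), H @ v)
# ...while each device holds only 1/devices of the curvature storage.
per_device = shards[0].nbytes
```

This toy version keeps dense matrices and synchronous concatenation; the paper additionally sparsifies the curvature information and overlaps its computation with the model update on different devices.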
Shunxian Gu
National University of Defense Technology, Changsha, China
Chaoqun You
Fudan University
ML/AI, wireless networking, 5G/6G, O-RAN, NTN
Bangbang Ren
National University of Defense Technology
NFV, SDN, Approximation algorithm, Segment Routing
Lailong Luo
National University of Defense Technology
Computer networks, Distributed Systems, Distributed Learning
Junxu Xia
National University of Defense Technology, Changsha, China
Deke Guo
National University of Defense Technology, Changsha, China