🤖 AI Summary
To address high memory overhead and slow convergence in distributed DNN training under resource-constrained settings, this paper proposes the Distributed Hybrid-Order Optimizer (DHO²). DHO² introduces the first distributed hybrid-order optimization framework, enabling device-level parallel computation of sparse curvature information and integrating gradient-based updates with partial curvature-driven corrections. Unlike conventional distributed first-order or second-order methods, DHO² achieves a near-linear reduction in per-device memory consumption as the number of devices grows, while delivering a 1.4×–2.1× end-to-end training speedup, substantially easing the memory bottleneck inherent to second-order optimization. The core innovations are (i) a distributed sparse approximation of curvature matrices and (ii) an asynchronous hybrid-order update mechanism that jointly leverages gradient and curvature signals across devices.
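The paper's exact update rule is not reproduced here, but the idea of mixing a first-order step with a partial curvature correction can be illustrated with a minimal JAX sketch. Everything below is an assumption for illustration, not the authors' code: parameters are flattened to one vector, and `top_eigvecs`/`top_eigvals` are hypothetical names for precomputed dominant Hessian eigenpairs (the "partial curvature information").

```python
# Minimal sketch of a hybrid-order step: a plain gradient step in most
# directions, plus a Newton-like correction inside the small subspace
# spanned by the top Hessian eigenvectors. Illustrative only.
import jax.numpy as jnp

def hybrid_order_update(params, grad, top_eigvecs, top_eigvals, lr=1e-3):
    """params, grad: (d,) arrays; top_eigvecs: (k, d); top_eigvals: (k,)."""
    # Project the gradient onto the dominant curvature subspace.
    coeffs = top_eigvecs @ grad                 # (k,) subspace coordinates
    grad_subspace = top_eigvecs.T @ coeffs      # gradient inside the subspace
    grad_complement = grad - grad_subspace      # gradient outside it
    # Curvature-driven correction: scale subspace coordinates by 1/eigenvalue.
    newton_step = top_eigvecs.T @ (coeffs / top_eigvals)
    # First-order step in the complement, second-order step in the subspace.
    return params - lr * grad_complement - newton_step
```

FOSI obtains such eigenpairs via a Lanczos-style iteration over Hessian-vector products; the "device-level parallel computation of sparse curvature information" described above presumably refers to spreading that cost across devices.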
📝 Abstract
Scaling deep neural network (DNN) training to more devices can reduce time-to-solution, but doing so is impractical for users with limited computing resources. FOSI, a hybrid-order optimizer, converges faster than conventional optimizers by exploiting both gradient information and curvature information when updating the DNN model, and thus offers a new opportunity to accelerate DNN training in resource-constrained settings. In this paper, we explore its distributed design, namely DHO$_2$, which includes distributed calculation of curvature information and model updates with partial curvature information, to accelerate DNN training with a low memory burden. To further reduce training time, we design a novel strategy that parallelizes the calculation of curvature information and the model update across different devices. Experimentally, our distributed design achieves an approximately linear reduction in the memory burden on each device as the number of devices increases. Meanwhile, it achieves a $1.4\times\sim2.1\times$ speedup in total training time compared with other distributed designs based on conventional first- and second-order optimizers.
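The abstract does not spell out how the curvature calculation is distributed; one plausible realization, sketched below in JAX, shards the Hessian-vector products that Lanczos-type curvature estimation consumes across devices. The `loss_fn`, the axis name `'devices'`, and the batch sharding are assumptions of this sketch, not the paper's API.

```python
# Hedged sketch: each device computes a Hessian-vector product (HVP) on its
# local data shard; an all-reduce averages the shard-level results into the
# global HVP that a Lanczos routine would consume.
from functools import partial

import jax
import jax.numpy as jnp

def loss_fn(params, batch):
    # Placeholder quadratic loss standing in for the real DNN loss.
    x, y = batch
    return jnp.mean((x @ params - y) ** 2)

def hvp(params, batch, v):
    # HVP via forward-over-reverse differentiation (no full Hessian stored).
    grad_fn = lambda p: jax.grad(loss_fn)(p, batch)
    return jax.jvp(grad_fn, (params,), (v,))[1]

@partial(jax.pmap, axis_name='devices', in_axes=(None, 0, None))
def sharded_hvp(params, local_batch, v):
    local = hvp(params, local_batch, v)               # per-device shard HVP
    return jax.lax.pmean(local, axis_name='devices')  # average across devices
```

Because only vectors of parameter size live on each device (never the full curvature matrix), per-device memory shrinks roughly linearly with the device count, which is consistent with the scaling the abstract reports.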