Mitigating Staleness in Asynchronous Pipeline Parallelism via Basis Rotation

📅 2026-02-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
In asynchronous pipeline parallelism, gradient staleness grows linearly with pipeline depth, causing adaptive optimizers to fail due to misalignment between the Hessian eigenvector basis and the coordinate basis, severely degrading convergence speed and scalability. This work is the first to reveal the coupling effect between gradient staleness and basis misalignment and proposes a Hessian eigenvector–based basis rotation mechanism that corrects stale gradients to restore curvature awareness. The method integrates seamlessly into Adam-family optimizers and, when applied to billion-parameter language model training, reduces the number of iterations required to reach a target loss by 76.8% compared to the best asynchronous baseline, substantially improving the scalability of asynchronous training.

📝 Abstract
Asynchronous pipeline parallelism maximizes hardware utilization by eliminating the pipeline bubbles inherent in synchronous execution, offering a path toward efficient large-scale distributed training. However, this efficiency gain can be compromised by gradient staleness, where immediately applying delayed gradients introduces noise into the optimization process. Crucially, we identify a critical yet often overlooked pathology: this delay scales linearly with pipeline depth, fundamentally undermining the very scalability the method aims to provide. In this work, we investigate this inconsistency and bridge the gap by rectifying delayed gradients through basis rotation, restoring scalable asynchronous training while maintaining performance. Specifically, we observe that the deleterious effects of delayed gradients are exacerbated when the Hessian eigenbasis is misaligned with the standard coordinate basis. We demonstrate that this misalignment prevents coordinate-wise adaptive schemes, such as Adam, from effectively leveraging curvature-aware adaptivity. This failure leads to significant oscillations in the optimization trajectory and, consequently, slower convergence. We substantiate these findings through both rigorous theoretical analysis and empirical evaluation. To address this challenge, we propose the use of basis rotation, demonstrating that it effectively mitigates the alignment issue and significantly accelerates convergence in asynchronous settings. For example, our training of a 1B-parameter LLM with basis rotation achieves the same training loss in 76.8% fewer iterations compared to the best-performing asynchronous pipeline parallel training baseline.
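The core idea in the abstract, applying Adam's coordinate-wise adaptivity in the Hessian eigenbasis rather than the raw parameter basis, can be sketched as follows. This is a minimal illustration of the general technique, not the paper's exact algorithm: the function name, update form, and the use of a full eigendecomposition (impractical at billion-parameter scale) are all assumptions for clarity.

```python
import numpy as np

def rotated_adam_step(theta, stale_grad, H, m, v, lr=1e-3,
                      beta1=0.9, beta2=0.999, eps=1e-8, t=1):
    """One Adam-style step where the (stale) gradient is first rotated
    into the Hessian eigenbasis, so the coordinate-wise second-moment
    scaling aligns with curvature directions instead of arbitrary axes.
    Sketch only; names and update form are illustrative assumptions."""
    # Eigendecomposition of a symmetric curvature estimate H.
    _, Q = np.linalg.eigh(H)            # columns of Q = eigenvectors
    g = Q.T @ stale_grad                # rotate gradient into eigenbasis
    m = beta1 * m + (1 - beta1) * g     # first moment, rotated basis
    v = beta2 * v + (1 - beta2) * g**2  # second moment, rotated basis
    m_hat = m / (1 - beta1**t)          # standard Adam bias correction
    v_hat = v / (1 - beta2**t)
    step = m_hat / (np.sqrt(v_hat) + eps)
    theta = theta - lr * (Q @ step)     # rotate update back to params
    return theta, m, v
```

On a quadratic whose Hessian is misaligned with the coordinate axes, the per-coordinate scaling now acts along eigen-directions, which is the alignment property the abstract argues stale gradients otherwise destroy.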
Problem

Research questions and friction points this paper is trying to address.

gradient staleness
asynchronous pipeline parallelism
scalability
optimization convergence
Hessian eigenbasis
Innovation

Methods, ideas, or system contributions that make the work stand out.

asynchronous pipeline parallelism
gradient staleness
basis rotation
Hessian eigenbasis
scalable distributed training
Hyunji Jung
POSTECH
Sungbin Shin
POSTECH
Namhoon Lee
POSTECH
machine learning