🤖 AI Summary
This work proposes NAMO and NAMO-D, two novel optimizers designed to enhance the training efficiency of large language models by effectively adapting to stochastic gradient noise while preserving the benefits of orthogonal momentum. The key innovation lies in the principled integration of orthogonal momentum with norm-based Adam-style adaptive mechanisms. NAMO unifies directional orthogonality and noise robustness, while NAMO-D further incorporates clipped diagonal scaling to enable neuron-level noise adaptation, aligning with the approximate block-diagonal structure of the Hessian. In GPT-2 pretraining experiments, both optimizers consistently outperform AdamW and Muon, with NAMO-D achieving superior performance due to its additional clipping mechanism.
📝 Abstract
Efficient stochastic optimization typically integrates an update direction that performs well in the deterministic regime with a mechanism adapting to stochastic perturbations. While Adam uses adaptive moment estimates to promote stability, Muon exploits the matrix structure of weight layers via orthogonalized momentum, showing superior performance in large language model training. We propose NAMO, a new optimizer, and its diagonal extension NAMO-D, which provide the first principled integration of orthogonalized momentum with norm-based Adam-type noise adaptation. NAMO scales orthogonalized momentum using a single adaptive stepsize, preserving orthogonality while improving upon Muon at negligible additional cost. NAMO-D instead right-multiplies orthogonalized momentum by a diagonal matrix with clamped entries. This design enables neuron-wise noise adaptation and aligns with the near block-diagonal structure commonly observed in the Hessian. Under standard assumptions, we establish optimal convergence rates for both algorithms in the deterministic setting and show that, in the stochastic setting, their convergence guarantees adapt to the noise level of the stochastic gradients. Experiments on pretraining GPT-2 models demonstrate improved performance of both NAMO and NAMO-D over the AdamW and Muon baselines, with NAMO-D achieving further gains over NAMO via an additional clamping hyperparameter that balances the competing goals of maintaining a well-conditioned update direction and leveraging fine-grained noise adaptation.
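To make the two update rules concrete, here is a minimal NumPy sketch of what such steps might look like, based only on the abstract's description. The Newton-Schulz orthogonalization (as popularized by Muon), the hyperparameter values, the exact second-moment formulas, and the column-wise reading of "neuron-wise" are all illustrative assumptions, not the paper's definitions.

```python
import numpy as np

def orthogonalize(M, steps=5):
    # Newton-Schulz iteration approximating the orthogonal factor of M,
    # in the spirit of Muon (coefficients are illustrative assumptions).
    X = M / (np.linalg.norm(M) + 1e-8)
    for _ in range(steps):
        X = 1.5 * X - 0.5 * (X @ X.T) @ X
    return X

def namo_update(W, G, state, lr=0.02, beta1=0.95, beta2=0.99, eps=1e-8):
    # Hypothetical NAMO-style step: orthogonalized momentum scaled by a
    # SINGLE adaptive stepsize built from an Adam-type second moment of
    # the gradient's Frobenius norm. Formulas are assumptions from the abstract.
    state["m"] = beta1 * state["m"] + (1 - beta1) * G
    state["v"] = beta2 * state["v"] + (1 - beta2) * np.linalg.norm(G) ** 2
    O = orthogonalize(state["m"])
    return W - lr * O / (np.sqrt(state["v"]) + eps)

def namo_d_update(W, G, state, lr=0.02, beta1=0.95, beta2=0.99,
                  eps=1e-8, clamp=(0.5, 2.0)):
    # Hypothetical NAMO-D-style step: per-column ("neuron-wise") second
    # moments yield a diagonal scaling whose entries are clamped to keep
    # the update well-conditioned; the orthogonalized momentum is then
    # right-multiplied by this diagonal. Illustrative, not the paper's method.
    state["m"] = beta1 * state["m"] + (1 - beta1) * G
    state["v"] = beta2 * state["v"] + (1 - beta2) * np.sum(G * G, axis=0)
    O = orthogonalize(state["m"])
    d = 1.0 / (np.sqrt(state["v"]) + eps)
    d = np.clip(d / d.mean(), *clamp)  # clamped diagonal entries
    return W - lr * O * d              # broadcasting = right-multiply by diag(d)
```

Note how the single scalar `v` in `namo_update` leaves the orthogonalized direction untouched, whereas the clamped per-column `d` in `namo_d_update` trades some conditioning for finer-grained noise adaptation, mirroring the tradeoff the abstract attributes to the clamping hyperparameter.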