🤖 AI Summary
Existing geometry-aware optimizers (e.g., Muon) use a single fixed learning rate for all norm-constrained layers in a group during deep neural network training, ignoring both the curvature heterogeneity across layers within a group and its temporal dynamics. Method: We propose a noise-adaptive, layer-wise learning rate scheme, the first to enable dynamic per-layer learning rate adaptation within each norm group of a geometry-aware optimizer. It estimates gradient variance under the dual norm online and combines this estimate with the norm-constrained linear minimization oracle (LMO) update to assign noise-driven per-layer learning rates. Contribution/Results: Our theoretical analysis establishes sharp convergence guarantees. Empirical evaluation on Transformer architectures, including LLaMA and GPT, shows that our method significantly outperforms state-of-the-art geometry-aware optimizers, yielding improved training efficiency and enhanced convergence stability.
📝 Abstract
Geometry-aware optimization algorithms, such as Muon, have achieved remarkable success in training deep neural networks (DNNs). These methods leverage the underlying geometry of DNNs by selecting appropriate norms for different layers and updating parameters via norm-constrained linear minimization oracles (LMOs). However, even within a group of layers associated with the same norm, the local curvature can be heterogeneous across layers and can vary dynamically over the course of training. For example, recent work shows that sharpness varies substantially across transformer layers and throughout training, yet standard geometry-aware optimizers impose fixed learning rates on all layers within the same group, which can be inefficient for DNN training.
In this paper, we introduce a noise-adaptive layer-wise learning rate scheme on top of geometry-aware optimization algorithms that substantially accelerates DNN training compared to methods using a fixed learning rate within each group. Our method estimates, on the fly, the gradient variance in the dual norm induced by the chosen LMO, and uses this estimate to assign time-varying, noise-adaptive layer-wise learning rates within each group. We provide a theoretical analysis showing that our algorithm achieves a sharp convergence rate. Empirical results on transformer architectures such as LLaMA and GPT demonstrate that our approach converges faster than state-of-the-art optimizers.
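As a rough illustration of the mechanism described in the abstract, the sketch below pairs a spectral-norm LMO update (the orthogonalized step used by Muon-style optimizers) with per-layer learning rates scaled by an online estimate of gradient noise measured in the dual norm (the nuclear norm, for the spectral-norm ball). Every estimator detail here, including the EMA constants, the learning rate formula, and the SVD-based orthogonalization, is an assumption made for illustration; the paper's actual estimator and rates are not specified in this summary.

```python
import numpy as np

def lmo_spectral(grad):
    # LMO direction for the spectral-norm ball: orthogonalize the gradient.
    # (Shown via SVD for clarity; Muon uses a Newton-Schulz iteration in practice.)
    U, _, Vt = np.linalg.svd(grad, full_matrices=False)
    return U @ Vt

def nuclear_norm(m):
    # The dual of the spectral norm is the nuclear norm (sum of singular values).
    return np.linalg.svd(m, compute_uv=False).sum()

class NoiseAdaptiveLayerLR:
    """Illustrative sketch: per-layer learning rates scaled by an online
    (EMA-based) estimate of gradient noise in the dual norm. Hypothetical
    formulas, not the paper's exact algorithm."""
    def __init__(self, shapes, base_lr=0.02, beta=0.9, eps=1e-8):
        self.base_lr, self.beta, self.eps = base_lr, beta, eps
        self.mom = [np.zeros(s) for s in shapes]  # EMA of gradients per layer
        self.var = [0.0 for _ in shapes]          # EMA of squared dual-norm deviation

    def step(self, params, grads):
        for i, (p, g) in enumerate(zip(params, grads)):
            # Deviation of the fresh gradient from its running mean,
            # measured in the dual (nuclear) norm: a proxy for gradient noise.
            dev = nuclear_norm(g - self.mom[i])
            self.var[i] = self.beta * self.var[i] + (1 - self.beta) * dev ** 2
            self.mom[i] = self.beta * self.mom[i] + (1 - self.beta) * g
            # Noisier layers take smaller steps; quiet layers keep ~base_lr.
            lr = self.base_lr / (1.0 + np.sqrt(self.var[i]) + self.eps)
            p -= lr * lmo_spectral(self.mom[i])
        return params
```

In this sketch all layers share one `base_lr` (the "group" learning rate), while the dual-norm variance estimate modulates it per layer and per step, which is the layer-wise, time-varying adaptation the abstract describes.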