Generalizing Adam To Manifolds For Efficiently Training Transformers

📅 2023-05-26
🏛️ arXiv.org
📈 Citations: 3
Influential: 0
🤖 AI Summary
The Adam optimizer lacks a clear geometric interpretation and cannot be straightforwardly generalized to homogeneous manifolds such as the Stiefel and Grassmann manifolds, which hinders efficient training of models (e.g., Transformers) under orthogonality or symplecticity constraints. This work proposes a projection-free, full generalization of Adam to such manifolds: leveraging the theory of homogeneous spaces, it exploits their global tangent space representation to carry out every step of the Adam update without an explicit projection. The method endows Adam with a rigorous differential-geometric interpretation and a mechanism that transfers across these manifolds. In experiments training Transformers under orthogonality constraints, the approach enforces the constraints up to machine precision, improves convergence stability, and significantly accelerates training, establishing a new optimization paradigm for neural networks subject to manifold constraints.
📝 Abstract
One of the primary reasons behind the success of neural networks has been the emergence of an array of new, highly-successful optimizers, perhaps most importantly the Adam optimizer. It is widely used for training neural networks, yet notoriously hard to interpret. Lacking a clear physical intuition, Adam is difficult to generalize to manifolds. Some attempts have been made to directly apply parts of the Adam algorithm to manifolds or to find an underlying structure, but a full generalization has remained elusive. In this work a new approach is presented that leverages the special structure of the manifolds which are relevant for optimization of neural networks, such as the Stiefel manifold, the symplectic Stiefel manifold, the Grassmann manifold and the symplectic Grassmann manifold: all of these are homogeneous spaces and as such admit a global tangent space representation. This global tangent space representation is used to perform all of the steps in the Adam optimizer and we are able to fully generalize the optimizer to manifolds without a projection step. The resulting algorithm is then applied to train a transformer for which orthogonality constraints are enforced up to machine precision and we observe significant speed-ups in the training process.
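To make the idea concrete, here is a minimal, hedged sketch of an Adam-style update on the Stiefel manifold (the set of matrices with orthonormal columns). It is not the paper's exact algorithm: it only illustrates the general pattern of lifting the gradient into a single global space shared by all points (here, the skew-symmetric matrices), running the usual Adam moment updates there, and moving along the manifold with a matrix exponential so that orthogonality is preserved to machine precision without a projection step. The function name, hyperparameters, and the element-wise second-moment estimate are illustrative choices, not the paper's.

```python
import numpy as np
from scipy.linalg import expm

def stiefel_adam_step(Y, grad, m, v, t, lr=1e-2, b1=0.9, b2=0.999, eps=1e-8):
    """One hedged Adam-like step on the Stiefel manifold St(n, p).

    Lifts the Euclidean gradient to a skew-symmetric matrix (an element
    of a global space shared by all points of the manifold), runs the
    standard Adam moment updates there, and moves along the manifold
    with a matrix exponential, so Y^T Y = I holds to machine precision.
    """
    # Lift: Omega = G Y^T - Y G^T is skew-symmetric for any gradient G.
    Omega = grad @ Y.T - Y @ grad.T
    # Standard Adam first/second moment estimates on the lifted gradient.
    m = b1 * m + (1 - b1) * Omega
    v = b2 * v + (1 - b2) * Omega**2
    m_hat = m / (1 - b1**t)
    v_hat = v / (1 - b2**t)
    step = m_hat / (np.sqrt(v_hat) + eps)
    # Element-wise scaling breaks skew-symmetry, so re-skew the step.
    step = 0.5 * (step - step.T)
    # The exponential of a skew-symmetric matrix is orthogonal, so the
    # update maps Stiefel points to Stiefel points exactly.
    Y = expm(-lr * step) @ Y
    return Y, m, v
```

Even after many steps with arbitrary gradients, `Y.T @ Y` stays equal to the identity up to floating-point round-off, which is the "machine precision" behavior the abstract refers to; a projection-based method would instead accumulate constraint violation between projections.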
Problem

Research questions and friction points this paper is trying to address.

Generalize Adam optimizer to manifolds efficiently
Train transformers with orthogonality constraints precisely
Achieve training speed-ups on homogeneous space manifolds
Innovation

Methods, ideas, or system contributions that make the work stand out.

Generalizes Adam optimizer to manifold structures
Uses the global tangent space representation of homogeneous spaces for all optimizer steps
Trains transformers with orthogonality constraints efficiently
Benedikt Brantner
Max-Planck-Institut für Plasmaphysik, Boltzmannstraße 2, 85748 Garching, Germany and Technische Universität München, Zentrum Mathematik, Boltzmannstraße 3, 85748 Garching, Germany