🤖 AI Summary
This paper addresses the convergence rate of the Adam optimizer in training deep neural networks, aiming to rigorously characterize its theoretical advantages over gradient descent and RMSprop.
Method: Leveraging non-convex optimization analysis, local quadratic approximation, and Hessian spectral characterization, the authors derive convergence rates under a strong convexity approximation near local minima.
Contribution/Results: The work provides the first rigorous proof that, in a neighborhood of a local minimum, Adam achieves the optimal accelerated linear convergence rate $(\sqrt{\kappa}-1)/(\sqrt{\kappa}+1)$, where $\kappa$ is the local condition number—matching the rate of momentum-based methods. In contrast, RMSprop attains only the suboptimal rate $(\kappa-1)/(\kappa+1)$. This establishes Adam's strict superiority over adaptive methods lacking momentum, furnishes the first theoretically grounded criterion for optimizer selection based on convergence speed, and corrects long-standing misconceptions regarding relative optimizer performance.
📝 Abstract
Gradient-descent-based optimization methods are the methods of choice for training deep neural networks in machine learning. Beyond the standard gradient descent method, suitably modified variants involving acceleration techniques such as the momentum method and/or adaptivity techniques such as the RMSprop method are also frequently considered. Nowadays the most popular of such sophisticated optimization schemes is presumably the Adam optimizer, proposed in 2014 by Kingma and Ba. A highly relevant topic of research is to investigate the speed of convergence of such optimization methods. In particular, in 1964 Polyak showed that the standard gradient descent method converges in a neighborhood of a strict local minimizer with rate $(\kappa - 1)(\kappa + 1)^{-1}$, while the momentum method achieves the (optimal) strictly faster convergence rate $(\sqrt{\kappa} - 1)(\sqrt{\kappa} + 1)^{-1}$, where $\kappa \in (1,\infty)$ is the condition number (the ratio of the largest to the smallest eigenvalue) of the Hessian of the objective function at the local minimizer. It is the key contribution of this work to reveal that Adam also converges with the strictly faster convergence rate $(\sqrt{\kappa} - 1)(\sqrt{\kappa} + 1)^{-1}$, while RMSprop only converges with the rate $(\kappa - 1)(\kappa + 1)^{-1}$.