🤖 AI Summary
This paper investigates why optimization algorithms differ so markedly in empirical performance, and under what assumptions a given algorithm can be considered optimal. We formulate the learning rule as the policy of an optimal control problem navigating a partially observable loss landscape, unifying gradient descent, momentum, natural gradient, and Adam within a single principled framework. Our method integrates optimal control theory, differential geometry, and online Bayesian inference to model parameter updates rigorously. Key contributions are: (1) a systematic characterization of the implicit prior structures and approximation strategies embedded in diverse optimizers, derived from controllability and observability assumptions; (2) principled derivations of both classical and adaptive algorithms, along with theoretical justification for continual learning techniques such as weight resetting; and (3) a framework for designing novel optimizers that balance interpretability with scalability. The framework provides a geometric and probabilistic foundation for understanding algorithmic behavior, enabling both analysis and synthesis of optimization methods in non-convex, dynamic settings.
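The summary's reading of adaptive optimizers as online inference can be made concrete with a small sketch. This is an illustration under our own assumptions, not the paper's derivation: Adam's exponential moving averages behave like running estimates of the gradient's mean and second moment under a slowly drifting landscape, and the update divides the mean estimate by its estimated scale. All names and hyperparameters below are standard Adam defaults, not quantities from the paper.

```python
import numpy as np

def adam_step(theta, g, m, v, t, lr=0.01, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update, read as online tracking of gradient statistics."""
    m = b1 * m + (1 - b1) * g        # running estimate of the gradient mean
    v = b2 * v + (1 - b2) * g * g    # running estimate of the second moment
    m_hat = m / (1 - b1 ** t)        # bias correction for the short history
    v_hat = v / (1 - b2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Toy problem: minimize f(theta) = (theta - 3)^2 from noisy gradients,
# mimicking partial observability of the landscape.
rng = np.random.default_rng(0)
theta, m, v = 0.0, 0.0, 0.0
for t in range(1, 5001):
    g = 2 * (theta - 3) + rng.normal(scale=0.1)  # noisy gradient observation
    theta, m, v = adam_step(theta, g, m, v, t)
```

The per-coordinate normalization by `sqrt(v_hat)` is what the inference view explains: step sizes shrink where observed gradients are large or noisy, and grow where they are small and consistent.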
📝 Abstract
Learning rules -- prescriptions for updating model parameters to improve performance -- are typically assumed rather than derived. Why do some learning rules work better than others, and under what assumptions can a given rule be considered optimal? We propose a theoretical framework that casts learning rules as policies for navigating (partially observable) loss landscapes, and identifies optimal rules as solutions to an associated optimal control problem. A range of well-known rules emerge naturally within this framework under different assumptions: gradient descent from short-horizon optimization, momentum from longer-horizon planning, natural gradients from accounting for parameter space geometry, non-gradient rules from partial controllability, and adaptive optimizers like Adam from online Bayesian inference of loss landscape shape. We further show that continual learning strategies like weight resetting can be understood as optimal responses to task uncertainty. By unifying these phenomena under a single objective, our framework clarifies the computational structure of learning and offers a principled foundation for designing adaptive algorithms.
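The abstract's contrast between short-horizon and longer-horizon policies can be sketched numerically. The following is a minimal illustration under assumptions of our own choosing (a fixed ill-conditioned quadratic loss with fully observable gradients), not an implementation of the paper's control problem: gradient descent acts as a greedy one-step policy, while momentum's velocity term integrates past gradients, which the framework interprets as planning over a longer horizon.

```python
import numpy as np

# Illustrative landscape: L(theta) = 0.5 * theta^T A theta, ill-conditioned.
A = np.diag([1.0, 10.0])

def loss(theta):
    return 0.5 * theta @ A @ theta

def grad(theta):
    return A @ theta  # exact gradient: full observability assumed

def gradient_descent(theta, lr=0.05, steps=100):
    # Short-horizon policy: each step greedily reduces the current loss.
    for _ in range(steps):
        theta = theta - lr * grad(theta)
    return theta

def momentum(theta, lr=0.05, beta=0.9, steps=100):
    # Longer-horizon policy: velocity accumulates past gradients,
    # carrying information across steps instead of acting greedily.
    v = np.zeros_like(theta)
    for _ in range(steps):
        v = beta * v + grad(theta)
        theta = theta - lr * v
    return theta

theta0 = np.array([1.0, 1.0])
print(loss(gradient_descent(theta0)), loss(momentum(theta0)))
```

Both policies drive the loss toward zero here; the framework's claim is that each is the optimal control solution under a different horizon assumption, not that one dominates the other on every landscape.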