How Memory in Optimization Algorithms Implicitly Modifies the Loss

📅 2025-02-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
Memory, i.e., dependence on past gradients, in deep learning optimizers implicitly perturbs the loss being optimized and can anti-regularize training, as exemplified by AdamW. Method: the paper approximates an optimizer with memory by a memoryless base algorithm (e.g., SGD) plus an explicit correction term, and interprets that correction as an implicit perturbation of the loss; Taylor expansion, iterative dynamical modeling, and gradient-decay analysis characterize the correction. Contribution/Results: a rigorous theoretical equivalence between optimizer memory and an implicit perturbation of the loss function; a quantification of AdamW's implicit L₂ anti-regularization; and an explanation of Lion's better generalization via the absence of that anti-regularization. The framework offers a principled, interpretable paradigm for optimizer design, unifying memory effects with loss geometry and enabling systematic analysis of generalization trade-offs in adaptive methods.
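For reference, the two update rules being contrasted can be written down directly. The sketch below uses the standard published formulations of AdamW and Lion, not code from this paper, and the hyperparameter defaults are illustrative. The point it makes concrete: AdamW's second moment `v` carries a memory of past gradient magnitudes that rescales every step, whereas Lion's `sign()` makes each coordinate's step a fixed ±lr regardless of that history.

```python
import numpy as np

def adamw_step(x, g, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8, wd=1e-2):
    """One AdamW update; v is a memory of past gradient magnitudes."""
    m = b1 * m + (1 - b1) * g
    v = b2 * v + (1 - b2) * g * g
    m_hat = m / (1 - b1 ** t)          # bias correction
    v_hat = v / (1 - b2 ** t)
    x = x - lr * (m_hat / (np.sqrt(v_hat) + eps) + wd * x)
    return x, m, v

def lion_step(x, g, m, lr=1e-3, b1=0.9, b2=0.99, wd=1e-2):
    """One Lion update; sign() discards gradient-magnitude memory."""
    c = b1 * m + (1 - b1) * g          # interpolated update direction
    x = x - lr * (np.sign(c) + wd * x)
    m = b2 * m + (1 - b2) * g          # momentum tracks gradients separately
    return x, m

# Minimal usage on L(x) = x^2 (gradient 2x): both reduce the loss, but only
# AdamW's step size depends on the history of gradient magnitudes.
xa = xl = np.array(1.0)
ma = va = ml = np.array(0.0)
for t in range(1, 101):
    xa, ma, va = adamw_step(xa, 2 * xa, ma, va, t)
    xl, ml = lion_step(xl, 2 * xl, ml)
assert abs(xa) < 1.0 and abs(xl) < 1.0
```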

📝 Abstract
In modern optimization methods used in deep learning, each update depends on the history of previous iterations, often referred to as memory, and this dependence decays fast as the iterates go further into the past. For example, gradient descent with momentum has exponentially decaying memory through exponentially averaged past gradients. We introduce a general technique for identifying a memoryless algorithm that approximates an optimization algorithm with memory. It is obtained by replacing all past iterates in the update by the current one, and then adding a correction term arising from memory (also a function of the current iterate). This correction term can be interpreted as a perturbation of the loss, and the nature of this perturbation can inform how memory implicitly (anti-)regularizes the optimization dynamics. As an application of our theory, we find that Lion does not have the kind of implicit anti-regularization induced by memory that AdamW does, providing a theory-based explanation for Lion's better generalization performance recently documented.
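As a concrete illustration of the replace-and-correct recipe described above (a numerical sketch under standard assumptions, not code from the paper): for heavy-ball momentum on a 1-D quadratic, replacing every past gradient by the current one yields the memoryless step −η∇L/(1−β), and a first-order Taylor expansion of the past gradients adds a correction proportional to ∇²L·∇L = ∇(½‖∇L‖²), i.e., a squared-gradient-norm perturbation of the loss. All constants below are illustrative.

```python
# Heavy-ball momentum on a 1-D quadratic L(x) = 0.5 * a * x**2.
a, beta, eta = 2.0, 0.9, 1e-4
grad = lambda x: a * x          # ∇L(x)
hess = a                        # ∇²L(x), constant for a quadratic

x, m = 1.0, 0.0
for _ in range(500):            # run long enough to reach quasi-steady state
    m = beta * m + grad(x)
    x = x - eta * m

# Actual next update taken by the algorithm with memory.
m_next = beta * m + grad(x)
actual_step = -eta * m_next

# Memoryless approximation: replace every past gradient by the current one,
# so the momentum sum becomes a geometric series g / (1 - beta).
naive_step = -eta * grad(x) / (1.0 - beta)

# First-order memory correction: past iterates sit roughly k*eta*g/(1-beta)
# uphill, and sum(beta**k * k) = beta / (1-beta)**2, giving a term along
# H @ g = grad(0.5 * |grad L|^2), a squared-gradient-norm loss perturbation.
corr = (eta * beta / (1.0 - beta) ** 2) * hess * grad(x)
corrected_step = -eta * (grad(x) + corr) / (1.0 - beta)

err_naive = abs(actual_step - naive_step)
err_corr = abs(actual_step - corrected_step)
assert err_corr < err_naive     # the correction sharpens the approximation
```

Shrinking η makes the corrected prediction track the true step ever more closely, consistent with the correction being the leading-order memory effect.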
Problem

Research questions and friction points this paper is trying to address.

How can an optimization algorithm with memory be approximated by a memoryless one?
How does memory perturb the loss and implicitly (anti-)regularize the optimization dynamics?
Why does Lion generalize better than AdamW?
Innovation

Methods, ideas, or system contributions that make the work stand out.

A general replace-and-correct technique: substitute the current iterate for all past iterates and add a memory correction term
Interpretation of the correction term as an implicit perturbation of the loss
A theory-based explanation of Lion's better generalization: it lacks AdamW's memory-induced anti-regularization