Continual Learning through Control Minimization

📅 2026-02-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses catastrophic forgetting in continual learning by formulating the learning process as a control problem and introducing the "Continual Natural Gradient" method. The approach dynamically balances weight updates by minimizing the control effort required to integrate new tasks while preserving representations of previous tasks through retention signals. Notably, it implicitly recovers the full second-order structure of past tasks without explicitly storing their curvature information, enabling effective task differentiation. By integrating control-theoretic regularization with modeling of neural activity dynamics, the method achieves superior performance over existing replay-free approaches on standard continual learning benchmarks, accurately reconstructing historical task curvatures and significantly mitigating forgetting.

📝 Abstract
Catastrophic forgetting remains a fundamental challenge for neural networks when tasks are trained sequentially. In this work, we reformulate continual learning as a control problem where learning and preservation signals compete within neural activity dynamics. We convert regularization penalties into preservation signals that protect prior-task representations. Learning then proceeds by minimizing the control effort required to integrate new tasks while competing with the preservation of prior tasks. At equilibrium, the neural activities produce weight updates that implicitly encode the full prior-task curvature, a property we term the continual-natural gradient, requiring no explicit curvature storage. Experiments confirm that our learning framework recovers true prior-task curvature and enables task discrimination, outperforming existing methods on standard benchmarks without replay.
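To make the role of prior-task curvature concrete, here is a minimal NumPy sketch of the kind of baseline the paper improves on: a quadratic retention penalty weighted by a stored diagonal curvature estimate (an EWC-style regularizer). This is an illustrative assumption, not the paper's method — the paper's point is that its control formulation recovers the full curvature implicitly, without storing `fisher_prev` at all.

```python
import numpy as np

def fisher_diagonal(per_sample_grads):
    """Diagonal Fisher estimate: mean of squared per-sample gradients."""
    g = np.asarray(per_sample_grads)
    return (g ** 2).mean(axis=0)

def penalized_gradient(grad_new, w, w_prev, fisher_prev, lam=1.0):
    """Gradient of the new-task loss plus a curvature-weighted retention term.

    The penalty (lam/2) * sum_i F_i (w_i - w_prev_i)^2 pulls weights back
    toward the prior-task solution most strongly along high-curvature
    directions, so important prior-task weights are preserved.
    """
    return grad_new + lam * fisher_prev * (w - w_prev)

# Toy usage: one stiff (high-curvature) and one free (low-curvature) weight.
w_prev = np.array([1.0, 1.0])          # prior-task solution
fisher_prev = np.array([10.0, 0.1])    # stored diagonal curvature
grad_new = np.array([1.0, 1.0])        # new task pushes both weights down

w = w_prev.copy()
lr = 0.05
for _ in range(200):
    w = w - lr * penalized_gradient(grad_new, w, w_prev, fisher_prev)

# The high-curvature weight barely moves; the low-curvature one moves far.
```

Because the penalty here is only diagonal, directions of joint importance across weights are lost — which is exactly the second-order structure the continual-natural gradient is claimed to retain without explicit storage.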
Problem

Research questions and friction points this paper is trying to address.

catastrophic forgetting
continual learning
neural networks
task sequentiality

Innovation

Methods, ideas, or system contributions that make the work stand out.

continual learning
control minimization
catastrophic forgetting
natural gradient
curvature preservation