The Effect of Architecture During Continual Learning

📅 2026-01-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses catastrophic forgetting in continual learning, where static neural architectures struggle to adapt to shifting data distributions. The authors propose a unified framework that jointly models network architecture and weights within a Sobolev space, employing bilevel optimization to learn both components simultaneously; the upper-level architecture search is carried out with a derivative-free direct search algorithm. To handle parameter dimension mismatches during architectural evolution, they introduce a low-rank knowledge transfer mechanism. Theoretically, they provide the first rigorous proof that weight-only optimization is insufficient to mitigate forgetting, thereby establishing a formal foundation for the co-adaptation of architecture and weights. Experiments across diverse networks and tasks demonstrate performance improvements of up to two orders of magnitude, significantly alleviating forgetting and enhancing robustness to noise.
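The summary mentions a low-rank mechanism for transferring weights between architectures whose matrix dimensions do not match. The paper's exact construction is not reproduced here; a minimal sketch, assuming a truncated-SVD mapping (the function name `low_rank_transfer` and the pad/truncate strategy are illustrative assumptions, not the authors' method):

```python
import numpy as np

def low_rank_transfer(W_old, new_shape, rank=None):
    """Map a weight matrix to a new shape via a truncated SVD.

    The top-r singular directions of the old weights are kept and
    padded (or truncated) to fit the new architecture's dimensions.
    """
    U, s, Vt = np.linalg.svd(W_old, full_matrices=False)
    m_new, n_new = new_shape
    r = min(rank or len(s), len(s), m_new, n_new)

    # Embed the leading singular vectors into the new dimensions,
    # zero-padding rows/columns the old architecture did not have.
    U_new = np.zeros((m_new, r))
    Vt_new = np.zeros((r, n_new))
    m_copy = min(U.shape[0], m_new)
    n_copy = min(Vt.shape[1], n_new)
    U_new[:m_copy, :] = U[:m_copy, :r]
    Vt_new[:, :n_copy] = Vt[:r, :n_copy]
    return U_new @ np.diag(s[:r]) @ Vt_new
```

When the target shape equals the source shape and the full rank is kept, the mapping reduces to the identity, so no knowledge is lost in the degenerate case.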

📝 Abstract
Continual learning is a challenge for models with static architecture, as they fail to adapt when data distributions evolve across tasks. We introduce a mathematical framework that jointly models architecture and weights in a Sobolev space, enabling a rigorous investigation into the role of neural network architecture in continual learning and its effect on the forgetting loss. We derive necessary conditions for the continual learning solution and prove that learning only model weights is insufficient to mitigate catastrophic forgetting under distribution shifts. Consequently, we prove that learning the architecture and weights simultaneously at each task reduces catastrophic forgetting. To learn weights and architecture simultaneously, we formulate continual learning as a bilevel optimization problem: the upper level selects an optimal architecture for a given task, while the lower level computes optimal weights via dynamic programming over all tasks. To solve the upper-level problem, we introduce a derivative-free direct search algorithm to determine the optimal architecture. Once found, we must transfer knowledge from the current architecture to the optimal one. However, the optimal architecture results in a weight parameter space different from that of the current architecture (i.e., the dimensions of the weight matrices will not match). To bridge the dimensionality gap, we develop a low-rank transfer mechanism to map knowledge across architectures of mismatched dimensions. Empirical studies across regression and classification problems, including feedforward, convolutional, and graph neural networks, demonstrate that learning the optimal architecture and weights simultaneously yields substantially improved performance (up to two orders of magnitude), reduced forgetting, and enhanced robustness to noise compared with static architecture approaches.
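The bilevel formulation in the abstract pairs a derivative-free upper-level search over architectures with a lower-level weight fit. A minimal sketch of that structure, assuming a pattern search over a single hidden width and a random-feature ridge fit standing in for full weight training (all names and the choice of search space are illustrative, not the paper's algorithm):

```python
import numpy as np

def lower_level_loss(width, X, y, rng):
    """Lower level: fit weights for a fixed architecture.

    A random-feature ridge regression stands in for full training.
    """
    W1 = rng.standard_normal((X.shape[1], width)) / np.sqrt(X.shape[1])
    H = np.tanh(X @ W1)  # hidden representation of the candidate architecture
    w2 = np.linalg.solve(H.T @ H + 1e-3 * np.eye(width), H.T @ y)
    resid = H @ w2 - y
    return float(resid @ resid / len(y))

def direct_search_architecture(X, y, width0=4, step=8, min_step=1, seed=0):
    """Upper level: derivative-free pattern search over the hidden width.

    Probe width +/- step; keep any improvement, otherwise halve the step.
    """
    width = width0
    best = lower_level_loss(width0, X, y, np.random.default_rng(seed))
    while step >= min_step:
        improved = False
        for cand in (width + step, max(1, width - step)):
            loss = lower_level_loss(cand, X, y, np.random.default_rng(seed))
            if loss < best:
                width, best, improved = cand, loss, True
        if not improved:
            step //= 2  # shrink the search radius, as in pattern search
    return width, best
```

Because the upper level only compares loss values, no gradient with respect to the (discrete) architecture is ever needed, which is the point of a direct search method.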
Problem

Research questions and friction points this paper is trying to address.

continual learning
catastrophic forgetting
neural architecture
distribution shift
static architecture
Innovation

Methods, ideas, or system contributions that make the work stand out.

continual learning
neural architecture learning
bilevel optimization
catastrophic forgetting
low-rank transfer