🤖 AI Summary
This work addresses the stability-plasticity dilemma in deep neural networks trained on non-stationary data, where stability refers to retaining previously acquired knowledge and plasticity to the ability to adapt to new tasks. The authors propose FIRE, a novel method that quantifies stability as the squared Frobenius norm of the deviation from past weights and plasticity as the deviation from isometry. By formulating this trade-off as a constrained optimization problem solved at each network reinitialization, FIRE balances both objectives in a principled manner; the resulting problem is efficiently approximated using Newton-Schulz iterations. Extensive experiments demonstrate that FIRE significantly outperforms both naive training and standard reinitialization strategies across diverse benchmarks, including continual visual learning with ResNet-18, language modeling with GPT-0.1B, and reinforcement learning with SAC and DQN, thereby achieving effective co-optimization of stability and plasticity.
📝 Abstract
Deep neural networks trained on non-stationary data must balance stability (i.e., retaining prior knowledge) and plasticity (i.e., adapting to new tasks). Standard reinitialization methods, which shift weights back toward their original values, are widely used but difficult to tune: conservative reinitializations fail to restore plasticity, while aggressive ones erase useful knowledge. We propose FIRE, a principled reinitialization method that explicitly balances the stability-plasticity trade-off. FIRE quantifies stability through the Squared Frobenius Error (SFE), measuring proximity to past weights, and plasticity through Deviation from Isometry (DfI), reflecting weight isotropy. The reinitialization point is obtained by solving a constrained optimization problem, minimizing SFE subject to DfI being zero, which is efficiently approximated by Newton-Schulz iteration. FIRE is evaluated on continual visual learning (CIFAR-10 with ResNet-18), language modeling (OpenWebText with GPT-0.1B), and reinforcement learning (HumanoidBench with SAC and Atari games with DQN). Across all domains, FIRE consistently outperforms both naive training without intervention and standard reinitialization methods, demonstrating effective balancing of the stability-plasticity trade-off.
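Minimizing SFE subject to DfI being zero amounts to finding the isometric (orthogonal) matrix nearest to the past weights, and Newton-Schulz iteration is a standard matrix-multiplication-only way to approximate that orthogonal polar factor. The sketch below illustrates the iteration in NumPy under that interpretation; the function name, iteration count, and scaling are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def newton_schulz_orthogonalize(W, num_iters=30):
    """Approximate the orthogonal matrix nearest to W (its polar factor)
    with Newton-Schulz iterations. Illustrative sketch only; the paper's
    exact reinitialization routine may differ in scaling and stopping rule.
    """
    # Scale so all singular values lie in (0, 1]; the iteration
    # X <- 1.5*X - 0.5*X X^T X converges when they are in (0, sqrt(3)).
    X = W / np.linalg.norm(W, ord=2)
    for _ in range(num_iters):
        X = 1.5 * X - 0.5 * X @ X.T @ X
    return X

# Usage: reinitialize toward the isometric point closest to the past weights,
# trading stability (proximity to W_past) against plasticity (isometry).
rng = np.random.default_rng(0)
W_past = rng.normal(size=(8, 8))          # stand-in for a trained weight matrix
W_init = newton_schulz_orthogonalize(W_past)
# W_init @ W_init.T is approximately the identity, i.e., DfI is near zero.
```

Because the iteration uses only matrix products, it runs efficiently on accelerators, which is presumably why it is preferred here over an exact SVD-based polar decomposition.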