🤖 AI Summary
This paper addresses the challenge of efficient online adaptation when system drift occurs post-deployment. To avoid costly full retraining or fine-tuning, the authors propose a dynamic weight-update method grounded in Subset Extended Kalman Filtering (SEKF). SEKF identifies a critical parameter subset via loss-gradient analysis and recursively updates only those weights within an Extended Kalman Filter framework, jointly optimizing accuracy and computational efficiency. The approach also reduces sensitivity to hyperparameter tuning. Evaluated on four dynamic regression tasks, it matches or exceeds the accuracy of fine-tuning baselines while requiring substantially less time per iteration. Key advantages include low computational overhead, high temporal responsiveness, and robustness to distributional shift. Overall, SEKF offers a practical approach to online continual learning in neural networks.
📝 Abstract
We present the Subset Extended Kalman Filter (SEKF), a method for updating previously trained model weights online, rather than retraining or fine-tuning them, when the system a model represents drifts away from the conditions under which it was trained. We identify the parameters to be updated using the gradient of the loss function and use the SEKF to update only those parameters. We compare fine-tuning and SEKF for online model maintenance in the presence of systemic drift through four dynamic regression case studies and find that the SEKF maintains model accuracy as well as, if not better than, fine-tuning while requiring significantly less time per iteration and less hyperparameter tuning.