Staying Alive: Online Neural Network Maintenance and Systemic Drift

📅 2025-03-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses the challenge of efficient online adaptation when system drift occurs post-deployment. To avoid costly full retraining or fine-tuning, we propose a dynamic weight update method grounded in Subset Extended Kalman Filtering (SEKF). SEKF dynamically identifies a critical parameter subset via loss gradient analysis and recursively updates only those weights within an Extended Kalman Filter framework, jointly optimizing accuracy and computational efficiency. The approach significantly reduces sensitivity to hyperparameter tuning. Evaluated on four dynamic regression tasks, it matches or exceeds the accuracy of fine-tuning baselines while reducing per-iteration latency by orders of magnitude. Key advantages include low computational overhead, high temporal responsiveness, and strong robustness to distributional shifts. Overall, SEKF establishes a novel paradigm for online continual learning in neural networks.

📝 Abstract
We present the Subset Extended Kalman Filter (SEKF) as a method to update previously trained model weights online, rather than retraining or finetuning them, when the system a model represents drifts away from the conditions under which it was trained. We identify the parameters to be updated using the gradient of the loss function and use the SEKF to update only these parameters. We compare finetuning and SEKF for online model maintenance in the presence of systemic drift through four dynamic regression case studies and find that the SEKF maintains model accuracy as well as, if not better than, finetuning while requiring significantly less time per iteration and less hyperparameter tuning.
Problem

Research questions and friction points this paper is trying to address.

Online neural network maintenance during systemic drift
Updating model weights without retraining or finetuning
Maintaining accuracy efficiently with reduced hyperparameter tuning
Innovation

Methods, ideas, or system contributions that make the work stand out.

SEKF updates model weights online
Uses loss gradient to identify parameters
Faster than finetuning with less tuning
Joshua E. Hammond
McKetta Department of Chemical Engineering, The University of Texas at Austin, 200 E. Dean Keeton St. Stop C0400, Austin, 78712, Texas, USA.
Tyler Soderstrom
ExxonMobil Technology and Engineering, Spring, Texas, USA.
Brian A. Korgel
McKetta Department of Chemical Engineering, The University of Texas at Austin, 200 E. Dean Keeton St. Stop C0400, Austin, 78712, Texas, USA.; Energy Institute, The University of Texas at Austin, 2304 Whitis Ave. Stop C2400, Austin, 78712, Texas, USA.
Michael Baldea
Professor, University of Texas at Austin; Editor in Chief, I&ECR
Process Systems Engineering · Smart Manufacturing · Process Intensification · Process Electrification