Online Continual Learning for Time Series: a Natural Score-driven Approach

📅 2026-01-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the tension between adapting to dynamically changing environments and mitigating catastrophic forgetting in online time series forecasting by proposing NatSR, a novel framework that reframes neural network optimization as a parameter filtering problem. By uncovering the score-driven nature of natural gradient descent and establishing its information-theoretic optimality, NatSR integrates a Student’s t likelihood to enforce bounded, robust parameter updates. To balance the retention of past knowledge with the assimilation of new information, the framework further incorporates a replay buffer and a dynamic scaling heuristic. Experimental results demonstrate that NatSR achieves superior adaptability, predictive accuracy, and robustness across multiple forecasting tasks, outperforming existing, more complex methods despite employing a notably simpler architecture.

📝 Abstract
Online continual learning (OCL) methods adapt to changing environments without forgetting past knowledge. Similarly, online time series forecasting (OTSF) is a real-world problem where data evolve in time and success depends on both rapid adaptation and long-term memory. Indeed, time-varying and regime-switching forecasting models have been extensively studied, offering a strong justification for the use of OCL in these settings. Building on recent work that applies OCL to OTSF, this paper aims to strengthen the theoretical and practical connections between time series methods and OCL. First, we reframe neural network optimization as a parameter filtering problem, showing that natural gradient descent is a score-driven method and proving its information-theoretic optimality. Then, we show that using a Student's t likelihood in addition to natural gradient induces a bounded update, which improves robustness to outliers. Finally, we introduce Natural Score-driven Replay (NatSR), which combines our robust optimizer with a replay buffer and a dynamic scale heuristic that improves fast adaptation at regime drifts. Empirical results demonstrate that NatSR achieves stronger forecasting performance than more complex state-of-the-art methods.
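The abstract notes that pairing natural gradient with a Student's t likelihood induces a bounded update, improving robustness to outliers. A minimal sketch of that core property (not the paper's full NatSR optimizer): for a location model with residual `e`, the Gaussian score `e / sigma**2` grows without bound, while the Student's t score `(nu + 1) * e / (nu * sigma**2 + e**2)` saturates and then decays, so a single outlier cannot dominate the parameter update. The function names and parameter values here are illustrative assumptions.

```python
import numpy as np

def gaussian_score(e, sigma=1.0):
    # Score of a Gaussian likelihood w.r.t. the location parameter:
    # linear in the residual, so outliers produce arbitrarily large updates.
    return e / sigma**2

def student_t_score(e, sigma=1.0, nu=4.0):
    # Score of a Student's t likelihood w.r.t. the location parameter:
    # (nu + 1) * e / (nu * sigma**2 + e**2).
    # Its magnitude peaks at |e| = sqrt(nu) * sigma and decays afterwards,
    # so the implied update is bounded regardless of how large |e| gets.
    return (nu + 1) * e / (nu * sigma**2 + e**2)

residuals = np.array([0.5, 2.0, 10.0, 100.0])
print(gaussian_score(residuals))   # grows linearly with the residual
print(student_t_score(residuals))  # peaks near sqrt(nu), then shrinks
```

With `nu=4` and `sigma=1`, the t-score is capped at `(nu + 1) / (2 * sqrt(nu)) = 1.25`, whereas the Gaussian score at a residual of 100 is simply 100; this is the sense in which heavy-tailed likelihoods discount outliers in score-driven updates.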
Problem

Research questions and friction points this paper is trying to address.

Online Continual Learning
Time Series Forecasting
Regime Switching
Outlier Robustness
Natural Gradient
Innovation

Methods, ideas, or system contributions that make the work stand out.

Natural Gradient Descent
Score-driven Methods
Online Continual Learning
Robust Optimization
Time Series Forecasting
Edoardo Urettini
University of Pisa, Pisa, Italy
Daniele Atzeni
IIT-CNR, Pisa, Italy
Ioanna-Yvonni Tsaknaki
Scuola Normale Superiore, Pisa, Italy
Antonio Carta
Assistant Professor @ Università di Pisa
continual learning, lifelong learning, deep learning, recurrent neural networks