🤖 AI Summary
To address catastrophic forgetting and slow adaptation in online learning under non-stationary time series and abrupt paradigm shifts, this paper proposes an online learning algorithm grounded in the critical state of chaotic dynamics. The method dynamically regulates neural network training via real-time control of the maximum Lyapunov exponent (MLE), steering the system to operate persistently near the “edge of chaos” (MLE ≈ 0). This regime simultaneously preserves stability of learned knowledge while enabling flexible exploration of the “adjacent possible” in local solution spaces. The approach integrates chaotic modeling, online MLE estimation, and adaptive regularization. Empirical evaluation on a Lorenz system with sudden parameter shifts demonstrates that the proposed method reduces test loss by approximately 96% compared to standard online training, significantly enhancing robustness to distributional shifts and accelerating model recalibration.
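The core mechanism described above, estimating the maximum Lyapunov exponent online and penalizing its distance from zero, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the Benettin-style finite-difference MLE estimator, the toy `tanh` recurrent map, and the penalty weight `beta` are all assumptions introduced for clarity.

```python
import numpy as np

def mle_of_map(step, x0, n_steps=200, eps=1e-8):
    """Benettin-style estimate of the maximum Lyapunov exponent of an
    iterated map `step` (illustrative sketch, not the paper's code)."""
    x = np.array(x0, dtype=float)
    rng = np.random.default_rng(0)
    v = rng.standard_normal(x.shape)
    v *= eps / np.linalg.norm(v)          # tiny random perturbation
    log_growth = 0.0
    for _ in range(n_steps):
        x_next = step(x)
        y_next = step(x + v)
        d = y_next - x_next
        norm = np.linalg.norm(d)
        log_growth += np.log(norm / eps)  # log expansion this step
        v = d * (eps / norm)              # renormalise the perturbation
        x = x_next
    return log_growth / n_steps           # average log expansion rate

# toy recurrent map x -> tanh(gain * W x) standing in for the network dynamics
rng = np.random.default_rng(1)
W = rng.standard_normal((16, 16)) / np.sqrt(16)
step = lambda x: np.tanh(2.0 * (W @ x))
lam = mle_of_map(step, rng.standard_normal(16))

# hypothetical edge-of-chaos regulariser added to the task loss:
# penalising lam**2 pushes the estimated MLE toward 0
beta = 1.0
penalty = beta * lam**2
```

In an online training loop, a term like `penalty` would be added to the task loss each step so that gradient updates keep the learned dynamics near MLE ≈ 0, rather than letting the system drift into fully ordered or fully chaotic regimes.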
📝 Abstract
Handling regime shifts and non-stationary time series in deep learning systems presents a significant challenge. In online learning, newly introduced information can disrupt previously learned knowledge and alter the model's overall paradigm, especially with non-stationary data sources. It is therefore crucial for neural systems to adapt quickly to new paradigms while preserving past knowledge that remains relevant to the overall problem. In this paper, we propose a novel training algorithm for neural networks called *Lyapunov Learning*. This approach leverages the properties of nonlinear chaotic dynamical systems to prepare the model for potential regime shifts. Drawing inspiration from Stuart Kauffman's Adjacent Possible theory, we exploit local unexplored regions of the solution space to enable flexible adaptation. The neural network is designed to operate at the edge of chaos, where the maximum Lyapunov exponent, indicative of a system's sensitivity to small perturbations, evolves around zero over time. Our approach demonstrates significant improvements in experiments involving regime shifts in non-stationary systems. In particular, we train a neural network to deal with an abrupt change in the parameters of the chaotic Lorenz system. The neural network equipped with Lyapunov Learning significantly outperforms regular training, improving the loss ratio by about 96%.
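The experimental setting described above, a Lorenz trajectory whose parameters change abruptly mid-stream, can be reproduced with a short numerical sketch. The specific parameter jump (ρ: 28 → 40 at the halfway point), the step size, and the integration scheme are assumptions for illustration; the paper does not specify them here.

```python
import numpy as np

def lorenz_rhs(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the Lorenz system."""
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def rk4_step(state, dt, **params):
    """One classical 4th-order Runge-Kutta step."""
    k1 = lorenz_rhs(state, **params)
    k2 = lorenz_rhs(state + 0.5 * dt * k1, **params)
    k3 = lorenz_rhs(state + 0.5 * dt * k2, **params)
    k4 = lorenz_rhs(state + dt * k3, **params)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

dt, n = 0.01, 4000
traj = np.empty((n, 3))
state = np.array([1.0, 1.0, 1.0])
for t in range(n):
    # abrupt regime shift halfway through the stream
    # (hypothetical parameter values, chosen for illustration)
    rho = 28.0 if t < n // 2 else 40.0
    state = rk4_step(state, dt, rho=rho)
    traj[t] = state
```

A model trained online on the first half of `traj` then faces a sudden distribution shift at `t = n // 2`, which is the recalibration scenario the abstract evaluates.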