Lyapunov Learning at the Onset of Chaos

📅 2025-06-15
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address catastrophic forgetting and slow adaptation in online learning under non-stationary time series and abrupt paradigm shifts, this paper proposes an online learning algorithm grounded in the critical state of chaotic dynamics. The method dynamically regulates neural network training via real-time control of the maximum Lyapunov exponent (MLE), steering the system to operate persistently near the “edge of chaos” (MLE ≈ 0). This regime simultaneously preserves stability of learned knowledge while enabling flexible exploration of the “adjacent possible” in local solution spaces. The approach integrates chaotic modeling, online MLE estimation, and adaptive regularization. Empirical evaluation on a Lorenz system with sudden parameter shifts demonstrates that the proposed method reduces test loss by approximately 96% compared to standard online training, significantly enhancing robustness to distributional shifts and accelerating model recalibration.
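The summary mentions online estimation of the maximum Lyapunov exponent (MLE) as a core ingredient. The paper's own estimator is not reproduced here; as an illustrative sketch, a Benettin-style running MLE estimate on the logistic map (a simple stand-in for the learned dynamics; `r`, `eps`, and `steps` are illustrative choices, not the paper's settings) works by tracking how fast two nearby trajectories separate and renormalising their distance at every step:

```python
import numpy as np

def online_mle_logistic(r, x0=0.4, eps=1e-8, steps=5000):
    """Benettin-style running estimate of the maximum Lyapunov exponent
    for the logistic map x -> r*x*(1-x): evolve a companion trajectory
    at distance eps, accumulate the log of the separation growth, and
    renormalise the separation back to eps after each step."""
    x, y = x0, x0 + eps
    total = 0.0
    for _ in range(steps):
        x = r * x * (1 - x)
        y = r * y * (1 - y)
        d = abs(y - x)
        if d == 0.0:
            d = eps  # trajectories coincided numerically; count zero growth
        total += np.log(d / eps)
        # renormalise the companion trajectory to distance eps from x
        y = x + eps * np.sign(y - x) if y != x else x + eps
    return total / steps
```

A positive estimate (e.g. near ln 2 at `r=4`) signals chaos, a negative one a stable regime; the edge-of-chaos target of the paper corresponds to holding this running estimate near zero.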

📝 Abstract
Handling regime shifts and non-stationary time series in deep learning systems presents a significant challenge. In the case of online learning, when new information is introduced, it can disrupt previously stored data and alter the model's overall paradigm, especially with non-stationary data sources. Therefore, it is crucial for neural systems to quickly adapt to new paradigms while preserving essential past knowledge relevant to the overall problem. In this paper, we propose a novel training algorithm for neural networks called *Lyapunov Learning*. This approach leverages the properties of nonlinear chaotic dynamical systems to prepare the model for potential regime shifts. Drawing inspiration from Stuart Kauffman's Adjacent Possible theory, we leverage local unexplored regions of the solution space to enable flexible adaptation. The neural network is designed to operate at the edge of chaos, where the maximum Lyapunov exponent, indicative of a system's sensitivity to small perturbations, evolves around zero over time. Our approach demonstrates effective and significant improvements in experiments involving regime shifts in non-stationary systems. In particular, we train a neural network to deal with an abrupt change in the Lorenz system's parameters. The neural network equipped with Lyapunov learning significantly outperforms the regular training, increasing the loss ratio by about 96%.
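The experiment described in the abstract uses a Lorenz system whose parameters change abruptly mid-stream. The paper's exact parameter values and shift protocol are not given in this excerpt; the sketch below generates such a regime-shift trajectory with a simple Euler integration, switching `rho` at a chosen step (the before/after values, step size, and shift point are illustrative assumptions):

```python
import numpy as np

def lorenz_with_shift(T=4000, dt=0.005, shift_at=2000,
                      sigma=10.0, beta=8.0 / 3.0,
                      rho_before=28.0, rho_after=38.0):
    """Integrate the Lorenz system with a forward-Euler scheme and
    switch rho abruptly at step `shift_at`, mimicking the kind of
    regime shift used in the paper's experiment.  All parameter
    values here are illustrative, not taken from the paper."""
    xyz = np.array([1.0, 1.0, 1.0])
    traj = np.empty((T, 3))
    for t in range(T):
        rho = rho_before if t < shift_at else rho_after
        x, y, z = xyz
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        xyz = xyz + dt * np.array([dx, dy, dz])
        traj[t] = xyz
    return traj
```

An online learner trained to predict the next state of this trajectory faces exactly the distributional break at `shift_at` that the proposed method is meant to absorb. For production use, a higher-order integrator (e.g. Runge-Kutta) would be preferable to forward Euler.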
Problem

Research questions and friction points this paper is trying to address.

Handling regime shifts in non-stationary time series
Adapting to new paradigms while preserving past knowledge
Training neural networks at the edge of chaos
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses Lyapunov Learning for neural networks
Leverages chaotic systems for regime shifts
Operates at edge of chaos for adaptation
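How the training loop is actually held at the edge of chaos is not spelled out in this excerpt. One plausible reading of "adaptive regularization" is a feedback rule that strengthens a contraction penalty when the estimated MLE is positive and relaxes it when negative; the proportional update below is an assumption for illustration, not the paper's method (`gain` and the clipping floor are hypothetical):

```python
def update_penalty_weight(lmbda, mle_estimate, gain=0.1, lmbda_min=0.0):
    """Illustrative proportional controller: raise the regularization
    weight when the estimated MLE is positive (dynamics too chaotic),
    lower it when negative (too stable), steering the system toward
    MLE ~ 0.  This update rule is an assumption, not the paper's."""
    return max(lmbda_min, lmbda + gain * mle_estimate)
```

Applied once per training step alongside an online MLE estimate, this keeps the effective loss `task_loss + lmbda * stability_penalty` self-tuning around the critical regime.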
Matteo Benati
Department of Computer, Automatic and Management Engineering, Sapienza University, Via Ariosto 25, Rome, Italy
Alessandro Londei
Sony Computer Science Laboratories - Rome, Joint Initiative CREF-SONY, Centro Ricerche Enrico Fermi, Via Panisperna 89/A, 00184, Rome, Italy
Denise Lanzieri
Sony Computer Science Laboratories - Rome, Joint Initiative CREF-SONY, Centro Ricerche Enrico Fermi, Via Panisperna 89/A, 00184, Rome, Italy
Vittorio Loreto
Professor of Physics, Sapienza University of Rome (Physics, Complex Systems, Social Dynamics)