🤖 AI Summary
This paper addresses the challenge of data-driven modeling for chaotic systems, whose extreme sensitivity to initial conditions hinders reliable prediction. We propose the first pre-trained forecasting model specifically designed for chaotic dynamics. Methodologically, we introduce a novel patched attention architecture grounded in dynamical systems theory, integrating temporal patching-based tokenization and self-attention augmented with dynamical priors; the model is pre-trained on a large-scale, purely synthetic dataset comprising 20,000 evolutionarily synthesized chaotic ordinary differential equations (ODEs). Key contributions include: (1) zero-shot accurate forecasting on unseen real-world chaotic systems (e.g., fluid turbulence, neuronal spiking); (2) the first empirical validation of spontaneous generalization in neural networks for partial differential equation (PDE) forecasting, alongside discovery of a neural differential equation scaling law; and (3) identification of intrinsic nonlinear resonance patterns within attention heads, revealing the model's internal dynamical representation mechanism. Our approach significantly extends conventional models' generalization capabilities across dimensions and system classes.
📄 Abstract
Chaotic systems are intrinsically sensitive to small errors, challenging efforts to construct predictive data-driven models of real-world dynamical systems such as fluid flows or neuronal activity. Prior efforts comprise either specialized models trained separately on individual time series, or foundation models trained on vast time series databases with little underlying dynamical structure. Motivated by dynamical systems theory, we present Panda, Patched Attention for Nonlinear DynAmics. We train Panda on a novel synthetic, extensible dataset of $2 \times 10^4$ chaotic dynamical systems that we discover using an evolutionary algorithm. Trained purely on simulated data, Panda exhibits emergent properties: zero-shot forecasting of unseen real-world chaotic systems, and nonlinear resonance patterns in cross-channel attention heads. Despite having been trained only on low-dimensional ordinary differential equations, Panda spontaneously develops the ability to predict partial differential equations without retraining. We demonstrate a neural scaling law for differential equations, underscoring the potential of pretrained models for probing abstract mathematical domains like nonlinear dynamics.