🤖 AI Summary
This work investigates how models can generalize to unseen dynamical systems and produce accurate forecasts without retraining. The authors train a compact two-layer, single-head Transformer to forecast dynamical systems and analyze it through the lenses of delay embeddings, attention maps, transfer operators, and the geometry of the underlying dynamical manifold, systematically evaluating its zero-shot predictive capability. They identify an early-training trade-off between in-distribution and out-of-distribution performance, and reveal that the attention mechanism enables cross-system generalization by recognizing global attractor structure. The model captures long-lived invariant sets on the system's higher-dimensional dynamical manifold and delivers accurate short-term forecasts on previously unseen physical systems, demonstrating the feasibility and potential of in-context learning for dynamical system modeling.
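To make the setup concrete, here is a minimal sketch of the kind of compact forecaster described above: a two-layer, single-head causal Transformer trained for next-step prediction on trajectory windows. The state dimension, hidden width, context length, and random stand-in data are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch (not the authors' exact model): a two-layer, single-head causal
# Transformer that maps a window of past states to one-step-ahead predictions.
# State dimension, hidden width, context length, and data are assumptions.
import torch
import torch.nn as nn

class TinyForecaster(nn.Module):
    def __init__(self, state_dim=3, d_model=64, n_layers=2, n_heads=1, max_len=256):
        super().__init__()
        self.embed = nn.Linear(state_dim, d_model)        # lift raw states into model space
        self.pos = nn.Embedding(max_len, d_model)         # learned positional encoding
        layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, dim_feedforward=4 * d_model,
            batch_first=True, norm_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, state_dim)         # read out the predicted next state

    def forward(self, x):                                 # x: (batch, time, state_dim)
        T = x.shape[1]
        h = self.embed(x) + self.pos(torch.arange(T, device=x.device))
        causal = torch.triu(torch.full((T, T), float("-inf"), device=x.device), diagonal=1)
        h = self.encoder(h, mask=causal)                  # each step attends only to the past
        return self.head(h)                               # one-step-ahead forecast per position

# Train on trajectories of one system; at test time, feed a context window from a
# *different* system and read off the model's zero-shot next-step predictions.
model = TinyForecaster()
context = torch.randn(8, 128, 3)                          # stand-in for trajectory windows
pred = model(context)                                     # (8, 128, 3)
```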
📝 Abstract
Large-scale foundation models for scientific machine learning adapt to physical settings unseen during training, such as zero-shot transfer between turbulent scales. This phenomenon, in-context learning, challenges conventional understanding of learning and adaptation in physical systems. Here, we study in-context learning of dynamical systems in a minimal setting: we train a small two-layer, single-head transformer to forecast one dynamical system, and then evaluate its ability to forecast a different dynamical system without retraining. We discover an early tradeoff in training between in-distribution and out-of-distribution performance, which manifests as a secondary double-descent phenomenon. We find that attention-based models apply a transfer-operator forecasting strategy in-context: they (1) lift low-dimensional time series using delay embedding to detect the system's higher-dimensional dynamical manifold, and (2) identify and forecast long-lived invariant sets that characterize the global flow on this manifold. Our results clarify the mechanism that enables large pretrained models to forecast unseen physical systems at test time without retraining, and they illustrate the unique ability of attention-based models to leverage global attractor information in service of short-term forecasts.
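The transfer-operator strategy attributed to the trained model is an empirical finding about its internal computation; the sketch below only illustrates the two classical ingredients it is compared against: a Takens delay embedding that lifts a scalar series onto a higher-dimensional manifold, and a coarse Ulam-style transfer-operator estimate whose eigenvalues near one mark long-lived (almost-invariant) sets. The toy signal, delay parameters, and bin counts are assumptions chosen for illustration only.

```python
# Illustrative sketch, not the trained network's computation: (1) a Takens delay
# embedding of a scalar series, and (2) a coarse Ulam-style transfer-operator
# estimate on the embedded trajectory. Signal, delays, and bins are assumptions.
import numpy as np

def delay_embed(x, dim=3, tau=5):
    """Map a scalar series x[t] to vectors (x[t], x[t+tau], ..., x[t+(dim-1)*tau])."""
    n = len(x) - (dim - 1) * tau
    return np.stack([x[i * tau : i * tau + n] for i in range(dim)], axis=1)

def ulam_operator(points, n_bins=20):
    """Crude Ulam approximation: bin the first two embedding coordinates on a grid
    and count one-step transitions between cells, then row-normalize."""
    edges = [np.linspace(points[:, d].min(), points[:, d].max(), n_bins + 1) for d in range(2)]
    ix = np.clip(np.digitize(points[:, 0], edges[0]) - 1, 0, n_bins - 1)
    iy = np.clip(np.digitize(points[:, 1], edges[1]) - 1, 0, n_bins - 1)
    cells = ix * n_bins + iy
    P = np.zeros((n_bins * n_bins, n_bins * n_bins))
    np.add.at(P, (cells[:-1], cells[1:]), 1.0)            # count cell -> cell transitions
    row_sums = P.sum(axis=1, keepdims=True)
    return np.divide(P, row_sums, out=np.zeros_like(P), where=row_sums > 0)

# Toy scalar observable standing in for a single measured coordinate of a flow.
t = np.linspace(0, 60, 6000)
x = np.sin(t) + 0.5 * np.sin(2.2 * t)
Z = delay_embed(x, dim=3, tau=5)       # lifted trajectory on a higher-dimensional manifold
P = ulam_operator(Z)                   # coarse transfer operator over the embedding
# Eigenvalues of P near 1 correspond to long-lived (almost-invariant) sets, the
# global attractor structure the paper argues attention exploits for forecasting.
eigvals = np.linalg.eigvals(P)
print("largest |eigenvalues|:", np.sort(np.abs(eigvals))[-3:])
```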