Transformers for dynamical systems learn transfer operators in-context

📅 2026-02-20
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work investigates how models can generalize to unseen dynamical systems and make accurate predictions without retraining. By training a compact two-layer, single-head Transformer to forecast dynamical systems, and analyzing it through delay embeddings, attention mechanisms, transfer operators, and dynamical-manifold analysis, the study systematically evaluates its zero-shot predictive capability. The authors identify an early-training trade-off between in-distribution and out-of-distribution performance, and show that the attention mechanism facilitates cross-system generalization by recognizing global attractor structure. The model captures long-lived invariant sets on higher-dimensional dynamical manifolds and produces accurate short-term predictions on previously unseen physical systems, demonstrating the feasibility and promise of in-context learning for dynamical-system modeling.

📝 Abstract
Large-scale foundation models for scientific machine learning adapt to physical settings unseen during training, such as zero-shot transfer between turbulent scales. This phenomenon, in-context learning, challenges conventional understanding of learning and adaptation in physical systems. Here, we study in-context learning of dynamical systems in a minimal setting: we train a small two-layer, single-head transformer to forecast one dynamical system, and then evaluate its ability to forecast a different dynamical system without retraining. We discover an early tradeoff in training between in-distribution and out-of-distribution performance, which manifests as a secondary double descent phenomenon. We discover that attention-based models apply a transfer-operator forecasting strategy in-context. They (1) lift low-dimensional time series using delay embedding, to detect the system's higher-dimensional dynamical manifold, and (2) identify and forecast long-lived invariant sets that characterize the global flow on this manifold. Our results clarify the mechanism enabling large pretrained models to forecast unseen physical systems at test time without retraining, and they illustrate the unique ability of attention-based models to leverage global attractor information in service of short-term forecasts.
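The delay-embedding step the abstract describes, lifting a scalar time series into higher-dimensional coordinates that recover the system's manifold, can be sketched as follows. This is an illustrative Takens-style embedding, not the authors' code; the function name and parameters (`dim`, `tau`) are our own.

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Lift a 1-D time series into dim-dimensional delay coordinates:
    row t is [x[t], x[t+tau], ..., x[t+(dim-1)*tau]]."""
    n = len(x) - (dim - 1) * tau
    return np.stack([x[i * tau : i * tau + n] for i in range(dim)], axis=1)

# Example: a scalar observable sampled from an oscillatory system
t = np.linspace(0, 8 * np.pi, 400)
x = np.sin(t)
emb = delay_embed(x, dim=3, tau=10)
print(emb.shape)  # (380, 3)
```

Each row of `emb` is a point on the reconstructed manifold; in the paper's framing, attention can then operate over these lifted states rather than the raw scalar series.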
Problem

Research questions and friction points this paper is trying to address.

in-context learning
dynamical systems
transfer operators
attention-based models
zero-shot transfer
Innovation

Methods, ideas, or system contributions that make the work stand out.

in-context learning
transfer operator
delay embedding
dynamical manifold
attention mechanism
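A transfer operator, one of the methods listed above, can be estimated from data by a coarse Markov (Ulam-type) discretization. The sketch below is a minimal illustration of that general idea, not the paper's method; the binning scheme and logistic-map example are our own assumptions.

```python
import numpy as np

def ulam_transfer_matrix(traj, n_bins, lo, hi):
    """Estimate a coarse transfer (Markov) operator from a 1-D trajectory:
    P[i, j] approximates the probability of moving from bin i to bin j
    in one time step."""
    bins = np.clip(((traj - lo) / (hi - lo) * n_bins).astype(int), 0, n_bins - 1)
    P = np.zeros((n_bins, n_bins))
    for a, b in zip(bins[:-1], bins[1:]):
        P[a, b] += 1.0
    row = P.sum(axis=1, keepdims=True)
    # Normalize each visited bin's row into a probability distribution
    return np.divide(P, row, out=np.zeros_like(P), where=row > 0)

# Trajectory of the chaotic logistic map x -> 4x(1-x); long-lived
# invariant sets show up in the leading eigenvectors of P
x = np.empty(5000)
x[0] = 0.2
for k in range(4999):
    x[k + 1] = 4.0 * x[k] * (1.0 - x[k])
P = ulam_transfer_matrix(x, n_bins=50, lo=0.0, hi=1.0)
print(P.shape)  # (50, 50)
```

Iterating a density through `P` gives a forecast of where the state's distribution flows, which is the operator-level view of forecasting that the paper attributes to attention.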
Anthony Bao
Department of Electrical & Computer Engineering, The University of Texas at Austin, Austin, Texas 78712, USA
Jeffrey Lai
The Oden Institute, The University of Texas at Austin, Austin, Texas 78712, USA
William Gilpin
The University of Texas at Austin