AI Summary
This work addresses the challenge of simultaneously achieving stability, interpretability, and generalization in time series forecasting by proposing a novel architecture that integrates learnable Koopman operators with Transformer-based backbones such as PatchTST, Informer, and Autoformer. By designing four variants of the Koopman operator, the method enables, for the first time, explicit control over the spectral properties, stability, and rank of the linear transition operator within deep forecasting models, allowing flexible interpolation between strictly stable and unconstrained dynamics. Experiments demonstrate that the approach significantly improves the bias-variance trade-off, numerical conditioning, and interpretability of latent dynamics across multi-horizon forecasting tasks, effectively combining theoretical guarantees with data-driven flexibility.
Abstract
This paper proposes a unified family of learnable Koopman operator parameterizations that integrate linear dynamical systems theory with modern deep learning forecasting architectures. We introduce four learnable Koopman variants (scalar-gated, per-mode gated, MLP-shaped spectral mapping, and low-rank Koopman operators) that generalize and interpolate between strictly stable Koopman operators and unconstrained linear latent dynamics. Our formulation enables explicit control over the spectrum, stability, and rank of the linear transition operator while retaining compatibility with expressive nonlinear backbones such as PatchTST, Autoformer, and Informer. We evaluate the proposed operators in a large-scale benchmark that also includes LSTM, DLinear, and simple diagonal State-Space Models (SSMs), as well as lightweight Transformer variants. Experiments across multiple horizons and patch lengths show that learnable Koopman models provide a favorable bias-variance trade-off, improved conditioning, and more interpretable latent dynamics. We provide a full spectral analysis, including eigenvalue trajectories, stability envelopes, and learned spectral distributions. Our results demonstrate that learnable Koopman operators are effective, stable, and theoretically principled components for deep forecasting.
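To make the scalar-gated idea concrete, below is a minimal NumPy sketch of one plausible parameterization; it is an illustrative assumption, not the paper's exact construction. A single learnable gate g = sigmoid(gate_logit) interpolates between a spectrally rescaled operator with spectral radius strictly below one (g near 0) and the unconstrained linear map W (g near 1), which is the kind of stable-to-unconstrained interpolation the abstract describes.

```python
import numpy as np

def scalar_gated_koopman(W, gate_logit, eps=1e-3):
    # Hypothetical sketch (not the paper's exact parameterization):
    # the gate g interpolates between a rescaled, strictly stable
    # operator (g -> 0) and the unconstrained linear map W (g -> 1).
    g = 1.0 / (1.0 + np.exp(-gate_logit))            # gate in (0, 1)
    rho = np.max(np.abs(np.linalg.eigvals(W)))       # spectral radius of W
    scale = min(1.0, (1.0 - eps) / max(rho, eps))    # shrink only if rho >= 1
    W_stable = scale * W                             # spectral radius < 1
    return g * W + (1.0 - g) * W_stable

def koopman_rollout(K, z0, horizon):
    # Roll the latent state forward: z_{t+1} = K z_t.
    states = [z0]
    for _ in range(horizon):
        states.append(K @ states[-1])
    return np.stack(states)
```

In a full model, `gate_logit` would be a trainable parameter optimized jointly with the nonlinear encoder and decoder, so the forecaster can learn how close to strict stability the latent dynamics should be.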