🤖 AI Summary
In scientific machine learning, representation learning and numerical solution methods have long evolved as separate pipelines, which hinders unified modeling and generalization for complex dynamical systems. To bridge this gap, we propose the “Latent Twins” framework, which constructs a hidden surrogate for the governing differential equations in a learned latent space, giving the digital-twin concept a precise mathematical counterpart. The approach unifies forward modeling, inverse problems, model reduction, and operator approximation under a single principle, integrating deep representation learning with classical scientific modeling. It supports single-shot temporal evaluation across arbitrary time gaps, comes with fundamental approximation guarantees for both ODEs and PDEs, and remains compatible with data assimilation, control, and uncertainty quantification. Evaluated on canonical ODEs, a shallow-water benchmark (with DeepONet and 4D-Var baselines), and real geopotential reanalysis data, Latent Twins reconstruct and forecast accurately from sparse, noisy observations.
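To make the encode–evolve–decode structure described above concrete, here is a minimal sketch under strong simplifying assumptions: a linear toy system, a fixed linear encoder/decoder pair, and a linear latent generator. All names (`encode`, `decode`, `latent_twin`, `latent_dim`) are our own placeholders, not the paper's API; in the actual framework these maps are learned.

```python
# Minimal sketch of the Latent Twin idea (illustrative, not the paper's code):
# mirror a dynamical system in a low-dimensional latent space and evolve it
# there, so any time gap tau costs a single evaluation of
#     S_tau  ≈  decode ∘ Phi_tau ∘ encode.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)

# Toy full-order linear system x' = A x on R^8 with a slow 3-dim subspace.
n, latent_dim = 8, 3
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
eigs = np.concatenate([[-0.1, -0.2, -0.3], -5.0 - rng.random(n - latent_dim)])
A = Q @ np.diag(eigs) @ Q.T

# Linear "encoder/decoder": orthogonal projection onto the slow subspace.
V = Q[:, :latent_dim]                 # basis of the slow subspace
def encode(x): return V.T @ x         # R^8 -> R^3
def decode(z): return V @ z           # R^3 -> R^8

# Latent generator: the dynamics as seen in latent coordinates.
L = V.T @ A @ V                       # 3x3

def latent_twin(x0, tau):
    """Single-shot surrogate: decode(Phi_tau(encode(x0)))."""
    return decode(expm(tau * L) @ encode(x0))

x0 = decode(rng.standard_normal(latent_dim))   # a state on the slow subspace
for tau in (0.5, 2.0, 10.0):                   # arbitrary gaps, one step each
    truth = expm(tau * A) @ x0
    err = np.linalg.norm(latent_twin(x0, tau) - truth)
    print(f"tau = {tau:4.1f}  |latent twin - truth| = {err:.2e}")
```

Because the toy system's slow subspace is invariant, this latent twin reproduces the full solution operator on that subspace to machine precision for any gap `tau`, which illustrates the single-shot evaluation property; the learned, nonlinear setting trades this exactness for the approximation guarantees the summary mentions.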
📝 Abstract
Over the past decade, scientific machine learning has transformed the development of mathematical and computational frameworks for analyzing, modeling, and predicting complex systems. From inverse problems to numerical PDEs, dynamical systems, and model reduction, these advances have pushed the boundaries of what can be simulated. Yet they have often progressed in parallel, with representation learning and algorithmic solution methods evolving largely as separate pipelines. With *Latent Twins*, we propose a unifying mathematical framework that creates a hidden surrogate in latent space for the underlying equations. Whereas digital twins mirror physical systems in the digital world, Latent Twins mirror mathematical systems in a learned latent space governed by operators. Through this lens, classical modeling, inversion, model reduction, and operator approximation all emerge as special cases of a single principle. We establish the fundamental approximation properties of Latent Twins for both ODEs and PDEs and demonstrate the framework across three representative settings: (i) canonical ODEs, capturing diverse dynamical regimes; (ii) a PDE benchmark using the shallow-water equations, contrasting Latent Twin simulations with DeepONet and forecasts with a 4D-Var baseline; and (iii) a challenging real-data setting, geopotential reanalysis, where we reconstruct and forecast from sparse, noisy observations. Latent Twins provide a compact, interpretable surrogate for the solution operator that evaluates across arbitrary time gaps in a single shot, while remaining compatible with scientific pipelines such as assimilation, control, and uncertainty quantification. Looking forward, this framework offers scalable, theory-grounded surrogates that bridge data-driven representation learning and classical scientific modeling across disciplines.
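Read as a formula (our paraphrase in our own notation, not necessarily the paper's), the abstract's hidden surrogate is a composition of an encoder, a latent evolution, and a decoder:

```latex
% Hedged paraphrase of the Latent Twin principle; \mathcal{E}, \mathcal{D},
% and \Phi_\tau are our placeholder symbols for the learned encoder,
% decoder, and latent evolution operator.
\[
  \mathcal{S}_\tau \;\approx\; \mathcal{D}\circ\Phi_\tau\circ\mathcal{E},
  \qquad
  u(t+\tau) \;\approx\; \mathcal{D}\!\bigl(\Phi_\tau(\mathcal{E}(u(t)))\bigr)
  \quad \text{for an arbitrary time gap } \tau,
\]
```

which is one way to see both the single-shot evaluation across arbitrary time gaps and how forward modeling, inversion, and model reduction can emerge as special cases of one construction.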