Higher-Order LaSDI: Reduced Order Modeling with Multiple Time Derivatives

📅 2025-12-17
🤖 AI Summary
To address the severe degradation in prediction accuracy and stability of reduced-order models (ROMs) over long-time evolution of complex partial differential equations (PDEs), this work proposes a high-order reduced-order modeling approach. The method explicitly incorporates multiple temporal derivatives to characterize the system dynamics and introduces a Rollout loss that enforces multi-step temporal consistency during training. A joint training framework combines high-order finite-difference discretizations with the Rollout loss, addressing the accuracy bottleneck of conventional ROMs in long-horizon prediction. The approach builds on proper orthogonal decomposition (POD), supervised neural-network modeling, and high-order temporal discretization. Evaluated on the 2D Burgers equation, the proposed method reduces long-time simulation error by over 60% and accelerates single-step inference by three orders of magnitude compared to baseline ROMs.
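The summary's "multiple temporal derivatives" idea can be illustrated with standard higher-order finite-difference stencils applied to a latent trajectory. The sketch below is a hypothetical helper, not the paper's scheme: it uses fourth-order central differences to estimate first and second time derivatives of latent states, which could then be supplied to a dynamics model.

```python
import numpy as np

def latent_time_derivatives(Z, dt):
    """Estimate first and second time derivatives of a latent trajectory
    Z (shape [n_steps, latent_dim]) with fourth-order central differences.

    Hypothetical illustration: the paper's exact stencils and derivative
    orders are not reproduced here.
    """
    # Standard fourth-order central-difference coefficients for d/dt and d2/dt2.
    c1 = np.array([1.0, -8.0, 0.0, 8.0, -1.0]) / (12.0 * dt)
    c2 = np.array([-1.0, 16.0, -30.0, 16.0, -1.0]) / (12.0 * dt**2)
    # Apply the 5-point stencil at each interior time step.
    dZ = np.array([c1 @ Z[i - 2:i + 3] for i in range(2, len(Z) - 2)])
    d2Z = np.array([c2 @ Z[i - 2:i + 3] for i in range(2, len(Z) - 2)])
    return dZ, d2Z
```

On a quadratic trajectory z(t) = t², these stencils recover dz/dt = 2t and d²z/dt² = 2 exactly, which is a convenient sanity check for the coefficients.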

📝 Abstract
Solving complex partial differential equations is vital in the physical sciences, but often requires computationally expensive numerical methods. Reduced-order models (ROMs) address this by exploiting dimensionality reduction to create fast approximations. While modern ROMs can solve parameterized families of PDEs, their predictive power degrades over long time horizons. We address this by (1) introducing a flexible, high-order, yet inexpensive finite-difference scheme and (2) proposing a Rollout loss that trains ROMs to make accurate predictions over arbitrary time horizons. We demonstrate our approach on the 2D Burgers equation.
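The dimensionality reduction the abstract refers to is, per the summary, based on proper orthogonal decomposition (POD). As a minimal sketch of that ingredient only (the paper additionally learns the latent dynamics), a POD basis can be computed from an SVD of a snapshot matrix:

```python
import numpy as np

def pod_basis(snapshots, r):
    """Leading-r POD basis of a snapshot matrix whose columns are
    full-order states. Minimal SVD-based sketch; the paper's pipeline
    additionally fits a neural model to the projected coordinates."""
    U, _, _ = np.linalg.svd(snapshots, full_matrices=False)
    return U[:, :r]

# Reduce and reconstruct a full-order state x:
#   z     = basis.T @ x    (latent coordinates)
#   x_hat = basis @ z      (approximate full-order state)
```

Because the basis columns are orthonormal, projecting and lifting a state that lies in the span of the leading modes reconstructs it exactly.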
Problem

Research questions and friction points this paper is trying to address.

Predictive accuracy of ROMs for PDEs degrades over long time horizons
High-order time discretizations are needed but must remain inexpensive
Standard training losses do not enforce accuracy over arbitrary time horizons
Innovation

Methods, ideas, or system contributions that make the work stand out.

Flexible, inexpensive high-order finite-difference scheme
Rollout loss that trains ROMs for long-horizon accuracy
Validation on the 2D Burgers equation
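The Rollout loss listed above penalizes error accumulated over multiple autoregressive steps rather than a single step. The sketch below conveys the flavor in plain NumPy; `step` is a hypothetical learned latent update, and the paper's exact weighting, horizon schedule, and neural-network training loop are not reproduced.

```python
import numpy as np

def rollout_loss(step, z_traj, horizon):
    """Multi-step rollout consistency loss (sketch).

    step    : hypothetical learned update z_{n+1} = step(z_n)
    z_traj  : reference latent trajectory, shape [n_steps, latent_dim]
    horizon : number of autoregressive steps to unroll
    """
    loss, count = 0.0, 0
    for n in range(len(z_traj) - horizon):
        z = z_traj[n]
        for k in range(1, horizon + 1):
            z = step(z)                               # autoregressive rollout
            loss += np.mean((z - z_traj[n + k]) ** 2)  # compare to reference
            count += 1
    return loss / count
```

A model that matches the true latent update drives this loss to zero over any horizon, whereas a model that is only accurate for one step accumulates error as `horizon` grows; that is the multi-step consistency the loss is meant to enforce.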
Robert Stephany
Center for Applied Scientific Computing, Lawrence Livermore National Laboratory, Livermore, CA 94550
William Michael Anderson
Center for Applied Scientific Computing, Lawrence Livermore National Laboratory, Livermore, CA 94550
Youngsoo Choi
Research Scientist, LLNL
Numerical linear algebra · Numerical optimization · Model order reduction · Design optimization · Machine learning