Inverse Evolution Data Augmentation for Neural PDE Solvers

📅 2025-01-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the scarcity and costly generation of high-fidelity PDE data for neural operator training, this paper proposes an inverse-evolution data augmentation paradigm. Starting from random initial conditions, it applies a purpose-built high-order explicit backward-in-time integration scheme to invert the evolution process defined by an implicit numerical discretization, thereby efficiently generating training samples that exactly satisfy the discrete physical constraints. This sidesteps the small-step-size restrictions of conventional solvers, avoiding numerical instability and error accumulation. Evaluated on the Burgers, Allen–Cahn, and Navier–Stokes equations with FNO and U-Net architectures, the method significantly improves model accuracy and generalization robustness while reducing both data-generation and training costs. The core contribution is the first differentiable, high-order, scheme-preserving inverse-evolution modeling framework.

📝 Abstract
Neural networks have emerged as promising tools for solving partial differential equations (PDEs), particularly through the application of neural operators. Training neural operators typically requires a large amount of training data to ensure accuracy and generalization. In this paper, we propose a novel data augmentation method specifically designed for training neural operators on evolution equations. Our approach utilizes insights from the inverse processes of these equations to efficiently generate data from random initializations, which are combined with the original data. To further enhance the accuracy of the augmented data, we introduce high-order inverse evolution schemes. These schemes consist of only a few explicit computation steps, yet the resulting data pairs can be proven to satisfy the corresponding implicit numerical schemes. In contrast to traditional PDE solvers, which require small time steps or implicit schemes to guarantee accuracy, our data augmentation method employs explicit schemes with relatively large time steps, thereby significantly reducing computational costs. Accuracy and efficacy experiments confirm the effectiveness of our approach. Additionally, we validate our approach through experiments with the Fourier Neural Operator and UNet on three common evolution equations: Burgers' equation, the Allen-Cahn equation, and the Navier-Stokes equation. The results demonstrate a significant improvement in the performance and robustness of the Fourier Neural Operator when coupled with our inverse evolution data augmentation method.
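The core trick described in the abstract can be illustrated with the simplest case: if the forward solver uses an implicit (backward Euler) scheme u_next = u_prev + dt * F(u_next), then starting from a *random* state u_next and taking one *explicit* step backward, u_prev = u_next - dt * F(u_next), yields a pair that satisfies the implicit scheme exactly, by construction. The sketch below demonstrates this for a periodic finite-difference discretization of viscous Burgers' equation; it is an illustrative first-order version of the idea, not the paper's high-order schemes, and all function names and parameter values are assumptions.

```python
import numpy as np

def rhs_burgers(u, nu, dx):
    # Periodic finite-difference right-hand side of viscous Burgers:
    # F(u) = -u * u_x + nu * u_xx (central differences).
    ux = (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)
    uxx = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2
    return -u * ux + nu * uxx

def inverse_evolution_pair(u_next, dt, nu, dx):
    # One explicit backward-in-time step. The resulting pair
    # (u_prev, u_next) exactly satisfies the implicit scheme
    #   u_next = u_prev + dt * F(u_next),
    # so no nonlinear solve is ever needed to generate it.
    u_prev = u_next - dt * rhs_burgers(u_next, nu, dx)
    return u_prev, u_next

# Generate an augmented training pair from a random smooth state
# (illustrative grid size, step size, and viscosity).
N = 128
dx = 2 * np.pi / N
dt, nu = 1e-3, 0.05
x = np.arange(N) * dx
rng = np.random.default_rng(0)
u_next = sum(rng.normal() * np.sin(k * x) / k for k in range(1, 5))

u_prev, u_next = inverse_evolution_pair(u_next, dt, nu, dx)

# Residual of the implicit (backward Euler) scheme is zero by construction.
residual = u_next - (u_prev + dt * rhs_burgers(u_next, nu, dx))
assert np.allclose(residual, 0.0)
```

The computational saving is that each data pair costs one explicit right-hand-side evaluation, whereas generating the same pair with a forward implicit solver would require solving a nonlinear system at every step; the paper's high-order variants extend this with a few additional explicit stages.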
Problem

Research questions and friction points this paper is trying to address.

Neural Operator Training
High-Quality Data Generation
Partial Differential Equations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Inverse Evolution Data Augmentation
Neural Operator Optimization
PDE Solving Enhancement
Chaoyu Liu
Department of Applied Mathematics and Theoretical Physics, University of Cambridge
Chris Budd
Professor of Mathematics, University of Bath
Applied and industrial mathematics, numerical analysis, public engagement
Carola-Bibiane Schönlieb
Department of Applied Mathematics and Theoretical Physics, University of Cambridge