MAD-NG: Meta-Auto-Decoder Neural Galerkin Method for Solving Parametric Partial Differential Equations

📅 2025-12-25
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the poor generalization and low efficiency in long-horizon prediction of neural solvers for parametric partial differential equations (PDEs), this paper proposes a spatiotemporally decoupled, meta-enhanced Neural Galerkin method. Our key contributions are: (1) the first integration of the Meta-Auto-Decoder paradigm into the Neural Galerkin framework, enabling second-level adaptive retraining (<5 seconds) under unknown parameter conditions; (2) a stochastic sparse gradient update scheme that reduces computational overhead while preserving physical consistency; and (3) synergistic incorporation of MAML-style meta-learning with an auto-decoder architecture to significantly improve cross-parameter generalization. Evaluated on multiple benchmark parametric PDEs, our method achieves state-of-the-art accuracy and robustness, reduces long-horizon prediction error by over 40%, and cuts computational cost by 60% compared to standard Neural Galerkin approaches.

Technology Category

Application Category

📝 Abstract
Parametric partial differential equations (PDEs) are fundamental for modeling a wide range of physical and engineering systems influenced by uncertain or varying parameters. Traditional neural network-based solvers, such as Physics-Informed Neural Networks (PINNs) and Deep Galerkin Methods, often face challenges in generalization and long-time prediction efficiency due to their dependence on full space-time approximations. To address these issues, we propose a novel and scalable framework that significantly enhances the Neural Galerkin Method (NGM) by incorporating the Meta-Auto-Decoder (MAD) paradigm. Our approach leverages space-time decoupling to enable more stable and efficient time integration, while meta-learning-driven adaptation allows rapid generalization to unseen parameter configurations with minimal retraining. Furthermore, randomized sparse updates effectively reduce computational costs without compromising accuracy. Together, these advancements enable our method to achieve physically consistent, long-horizon predictions for complex parameterized evolution equations with significantly lower computational overhead. Numerical experiments on benchmark problems demonstrate that our methods performs comparatively well in terms of accuracy, robustness, and adaptability.
Problem

Research questions and friction points this paper is trying to address.

Enhances Neural Galerkin Method for parametric PDEs with meta-learning
Improves generalization and efficiency in long-time predictions
Reduces computational cost via space-time decoupling and sparse updates
Innovation

Methods, ideas, or system contributions that make the work stand out.

Meta-Auto-Decoder enhances Neural Galerkin Method
Space-time decoupling enables stable time integration
Randomized sparse updates reduce computational costs
🔎 Similar Papers
No similar papers found.
Q
Qiuqi Li
Department of Mathematics, Hunan University, Changsha, 100083, China
Yiting Liu
Yiting Liu
University of California San Diego
EDAVLSI Physical DesignMachine LearningData Privacy Protection
J
Jin Zhao
Academy for Multidisciplinary Studies, Beijing National Center for Applied Mathematics, Capital Normal University, Beijing, 100048, China
W
Wencan Zhu
Department of Mathematics, Hunan University, Changsha, 100083, China