🤖 AI Summary
To address the poor generalization and low efficiency in long-horizon prediction of neural solvers for parametric partial differential equations (PDEs), this paper proposes a spatiotemporally decoupled, meta-enhanced Neural Galerkin method. Our key contributions are: (1) the first integration of the Meta-Auto-Decoder paradigm into the Neural Galerkin framework, enabling second-level adaptive retraining (<5 seconds) under unknown parameter conditions; (2) a stochastic sparse gradient update scheme that reduces computational overhead while preserving physical consistency; and (3) synergistic incorporation of MAML-style meta-learning with an auto-decoder architecture to significantly improve cross-parameter generalization. Evaluated on multiple benchmark parametric PDEs, our method achieves state-of-the-art accuracy and robustness, reduces long-horizon prediction error by over 40%, and cuts computational cost by 60% compared to standard Neural Galerkin approaches.
📝 Abstract
Parametric partial differential equations (PDEs) are fundamental for modeling a wide range of physical and engineering systems influenced by uncertain or varying parameters. Traditional neural network-based solvers, such as Physics-Informed Neural Networks (PINNs) and Deep Galerkin Methods, often face challenges in generalization and long-time prediction efficiency due to their dependence on full space-time approximations. To address these issues, we propose a novel and scalable framework that significantly enhances the Neural Galerkin Method (NGM) by incorporating the Meta-Auto-Decoder (MAD) paradigm. Our approach leverages space-time decoupling to enable more stable and efficient time integration, while meta-learning-driven adaptation allows rapid generalization to unseen parameter configurations with minimal retraining. Furthermore, randomized sparse updates effectively reduce computational costs without compromising accuracy. Together, these advancements enable our method to achieve physically consistent, long-horizon predictions for complex parameterized evolution equations with significantly lower computational overhead. Numerical experiments on benchmark problems demonstrate that our methods performs comparatively well in terms of accuracy, robustness, and adaptability.