🤖 AI Summary
To address the limited accuracy and poor generalization of Graph Neural Networks (GNNs) in large-scale Computational Fluid Dynamics (CFD) simulations, this paper introduces a masked pretraining paradigm for GNNs. The method randomly masks up to 40% of mesh nodes during pretraining to force the model to learn robust representations of complex fluid dynamics, and pairs this masking strategy with an asymmetric encoder-decoder architecture and gated MLPs. It also enables joint pretraining across multiple CFD datasets, significantly improving long-horizon prediction accuracy and cross-task generalization while reducing the time and data needed to reach high performance on new tasks. Evaluated on seven CFD benchmark datasets, including a new large-scale 3D intracranial aneurysm simulation with over 250,000 nodes per mesh, the approach achieves state-of-the-art (SOTA) performance, improving long-term prediction accuracy by up to 60% over previous best models at comparable computational cost.
📝 Abstract
We introduce a novel masked pre-training technique for graph neural networks (GNNs) applied to computational fluid dynamics (CFD) problems. By randomly masking up to 40% of input mesh nodes during pre-training, we force the model to learn robust representations of complex fluid dynamics. We pair this masking strategy with an asymmetric encoder-decoder architecture and gated multi-layer perceptrons to further enhance performance. The proposed method achieves state-of-the-art results on seven CFD datasets, including a new challenging dataset of 3D intracranial aneurysm simulations with over 250,000 nodes per mesh. Moreover, it significantly improves model performance and training efficiency across this diverse range of fluid simulation tasks. We demonstrate improvements of up to 60% in long-term prediction accuracy compared to previous best models, while maintaining similar computational costs. Notably, our approach enables effective pre-training on multiple datasets simultaneously, significantly reducing the time and data required to achieve high performance on new tasks. Through extensive ablation studies, we provide insights into the optimal masking ratio, architectural choices, and training strategies.
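To make the masking strategy concrete, below is a minimal sketch of how randomly masking a fraction of mesh nodes for reconstruction-based pretraining might look. This is not the authors' implementation: the `mask_nodes` helper, the 40% default ratio applied as a single draw, and the use of zeros as a stand-in for a learned mask token are all illustrative assumptions.

```python
import numpy as np

def mask_nodes(node_features, mask_ratio=0.4, rng=None):
    """Randomly mask a fraction of mesh nodes for pretraining.

    Illustrative sketch: masked nodes have their input features
    replaced by a placeholder (zeros here; a real model would
    typically use a learned mask token). The pretraining objective
    would then be to reconstruct the original features at the
    masked positions from the surrounding graph context.
    """
    if rng is None:
        rng = np.random.default_rng()
    n_nodes = node_features.shape[0]
    n_masked = int(mask_ratio * n_nodes)
    masked_idx = rng.choice(n_nodes, size=n_masked, replace=False)
    mask = np.zeros(n_nodes, dtype=bool)
    mask[masked_idx] = True
    corrupted = node_features.copy()
    corrupted[mask] = 0.0  # stand-in for a learned mask token
    return corrupted, mask
```

During pretraining, `corrupted` would be fed to the encoder, and a reconstruction loss would be computed only at positions where `mask` is true; an asymmetric design can additionally drop masked nodes from the encoder input and reinsert them before the (lighter) decoder.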