🤖 AI Summary
Scientific computing has long struggled to simultaneously achieve accuracy, generalizability, and physical consistency when solving multiscale, multiphysics partial differential equations (PDEs). To address this, we propose the first foundation model framework tailored for scientific computing. Our method introduces: (1) a physics-aligned unified architecture integrating a Fourier encoder-decoder with a temporal-dynamics fusion Transformer; (2) a novel joint 1D-2D-3D pretraining paradigm; and (3) the PDE-Aligner module, a physics-constraint-driven fine-tuning mechanism augmented with in-context learning and zero-shot generalization capabilities. Evaluated on the PDEBench benchmark, our framework establishes new state-of-the-art performance across 1D, 2D, and 3D PDE tasks. It significantly enhances cross-scale and cross-physics generalization, enables zero-shot transfer to unseen PDE scenarios, and supports engineering-grade real-time prediction, marking a foundational advance toward physics-informed, scalable scientific AI.
📝 Abstract
Foundation models have revolutionized language modeling, but whether this success can be replicated in scientific computing remains unexplored. We present OmniArch, the first prototype aimed at solving multi-scale and multi-physics scientific computing problems with physical alignment. We address the three challenges of accuracy, generalizability, and physical consistency with one unified architecture. Its pre-training stage comprises a Fourier encoder-decoder that reconciles representations across different spatial dimensions and a Transformer backbone that integrates physical quantities through their temporal dynamics, while the novel PDE-Aligner performs physics-informed fine-tuning under flexible conditions. To the best of our knowledge, this is the first unified 1D-2D-3D pre-training on PDEBench; it not only sets new performance benchmarks for 1D, 2D, and 3D PDEs but also demonstrates exceptional adaptability to new physics via in-context and zero-shot learning, supporting realistic engineering applications and prospective physics discovery.
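The key idea behind the unified encoder is that fields of any spatial dimensionality can be mapped into a shared token space via their low-frequency Fourier modes. The sketch below illustrates this in NumPy; the function name, mode count, and padding scheme are illustrative assumptions, not the paper's actual encoder.

```python
import numpy as np

def fourier_tokens(field: np.ndarray, modes: int = 8) -> np.ndarray:
    """Map a 1D/2D/3D field to a fixed-size token vector by keeping
    the lowest `modes` Fourier coefficients along each axis.
    (Illustrative sketch only; not the paper's implementation.)"""
    spec = np.fft.fftn(field)
    # Slice the low-frequency corner of the spectrum in every dimension.
    low = spec[tuple(slice(0, modes) for _ in range(field.ndim))]
    # Zero-pad so 1D, 2D, and 3D fields share one token length.
    flat = np.zeros(modes ** 3, dtype=complex)
    flat[: low.size] = low.ravel()
    # Stack real and imaginary parts into one real-valued token.
    return np.concatenate([flat.real, flat.imag])

# Fields of different dimensionality map to the same embedding size,
# so a single Transformer backbone can consume all of them.
u1 = np.random.rand(64)          # 1D field
u2 = np.random.rand(64, 64)      # 2D field
u3 = np.random.rand(16, 16, 16)  # 3D field
tok1, tok2, tok3 = (fourier_tokens(u) for u in (u1, u2, u3))
print(tok1.shape, tok2.shape, tok3.shape)  # all (1024,)
```

With `modes=8`, each field yields 512 complex coefficients and hence a 1024-dimensional real token, regardless of whether the input was 1D, 2D, or 3D.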