🤖 AI Summary
Traditional numerical methods for solving partial differential equations (PDEs) incur high computational costs, while existing deep learning surrogate models exhibit poor generalization—particularly when extrapolating to unseen PDE structures or coefficients.
Method: We propose an equation-aware neural operator framework that explicitly encodes PDE-specific differential operators and coefficient fields into a conditional vector, enabling the model to intrinsically capture structural inductive biases of the governing equations.
Contribution/Results: Our approach achieves stable out-of-distribution generalization across parameter regimes and equation forms, including reliable long-term rollout stability. Benchmarked with four distinct modeling techniques on a family of 1D PDEs from the APEBench suite, the equation-aware models maintain high accuracy and robustness on parameter sets held out from training and generalize to an entirely unseen PDE.
📝 Abstract
Solving partial differential equations (PDEs) can be prohibitively expensive using traditional numerical methods. Deep learning-based surrogate models typically specialize in a single PDE with fixed parameters. We present a framework for equation-aware emulation that generalizes to unseen PDEs by conditioning a neural model on a vector encoding representing the terms in a PDE and their coefficients. We present a baseline of four distinct modeling techniques, trained on a family of 1D PDEs from the APEBench suite. Our approach achieves strong performance on parameter sets held out from the training distribution, stable rollouts beyond the training window, and generalization to an entirely unseen PDE. This work was developed as part of a broader effort exploring AI systems that automate the creation of expert-level empirical software for scorable scientific tasks. The data and codebase are available at https://github.com/google-research/generalized-pde-emulator.
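The equation-aware conditioning described above can be sketched as a fixed-length vector over a canonical ordering of PDE terms, with zeros for terms absent from a given equation. This is a minimal illustration only: the term names, ordering, and normalization here are assumptions, not the encoding used in the released codebase.

```python
import numpy as np

# Hypothetical fixed ordering of spatial-derivative terms for a 1D PDE
# family (illustrative; the actual encoding may differ).
TERM_ORDER = ("advection", "diffusion", "dispersion", "hyperdiffusion")

def encode_pde(coeffs: dict) -> np.ndarray:
    """Map a PDE's term coefficients to a fixed-length conditioning vector.

    Terms absent from the equation get a zero entry, so one shared vector
    space covers every member of the PDE family, including coefficient
    combinations never seen during training.
    """
    return np.array([float(coeffs.get(term, 0.0)) for term in TERM_ORDER])

# 1D advection-diffusion, u_t + c u_x = nu u_xx, with c = 1.0, nu = 0.01:
vec = encode_pde({"advection": 1.0, "diffusion": 0.01})
# vec == [1.0, 0.01, 0.0, 0.0]
```

A neural emulator can then receive this vector alongside the solution state at each step, so a single trained model serves the whole equation family rather than one PDE with fixed parameters.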