🤖 AI Summary
This work addresses key challenges in operator learning for multi-input partial differential equations (PDEs): poor generalization, lack of discretization invariance, and model redundancy. We propose ReBaNO, a data-sparse, adaptive operator learning framework. Methodologically, ReBaNO integrates parsimonious basis construction with generative pretraining principles, employing a mathematically rigorous greedy algorithm to design the network architecture offline. It is the first operator learning method to guarantee strict discretization invariance, i.e., exact independence from discretization resolution and grid topology. Furthermore, ReBaNO incorporates knowledge distillation via task-specific activation functions and physics-informed pretraining to substantially reduce model size and online computational cost. Experiments demonstrate that ReBaNO consistently outperforms state-of-the-art baselines, including PCA-Net, DeepONet, FNO, and CNO, on both in-distribution and out-of-distribution benchmarks. Notably, it remains the only operator learning model satisfying strict discretization invariance.
📝 Abstract
We propose a novel data-lean operator learning algorithm, the Reduced Basis Neural Operator (ReBaNO), to solve a family of PDEs with multiple distinct inputs. Inspired by the Reduced Basis Method and the recently introduced Generative Pre-Trained Physics-Informed Neural Networks, ReBaNO relies on a mathematically rigorous greedy algorithm to build its network structure offline, adaptively and from the ground up. Knowledge distillation via a task-specific activation function gives ReBaNO a compact architecture that embeds the physics while requiring minimal computational cost online. Compared with state-of-the-art operator learning algorithms such as PCA-Net, DeepONet, FNO, and CNO, numerical results demonstrate that ReBaNO significantly outperforms them, eliminating or shrinking the generalization gap on both in- and out-of-distribution tests, and that it is the only operator learning algorithm achieving strict discretization invariance.
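The greedy offline construction the abstract refers to can be illustrated on a toy linear problem. The sketch below is a minimal numpy analogue, not the paper's implementation: all names are hypothetical, and a least-squares projection error stands in for the physics-informed residual that ReBaNO would use to pick the next pretrained network to add as a basis function.

```python
import numpy as np

def projection_error(u, basis):
    """L2 error of the best least-squares approximation of u in span(basis)."""
    if not basis:
        return np.linalg.norm(u)
    B = np.stack(basis, axis=1)               # columns are chosen snapshots
    coeffs, *_ = np.linalg.lstsq(B, u, rcond=None)
    return np.linalg.norm(u - B @ coeffs)

def greedy_reduced_basis(snapshots, tol=1e-3, max_size=10):
    """Greedy loop: repeatedly add the snapshot the current basis
    approximates worst, until every error falls below tol."""
    basis, picked = [], []
    for _ in range(max_size):
        errors = [projection_error(u, basis) for u in snapshots]
        worst = int(np.argmax(errors))
        if errors[worst] < tol:
            break                             # all snapshots well approximated
        basis.append(snapshots[worst])
        picked.append(worst)
    return basis, picked

# Toy demo: 20 snapshots drawn from a 2-dimensional subspace of R^50,
# so the greedy loop should stop after selecting exactly 2 basis elements.
rng = np.random.default_rng(0)
modes = rng.standard_normal((50, 2))
snapshots = [modes @ rng.standard_normal(2) for _ in range(20)]
basis, picked = greedy_reduced_basis(snapshots)
print(len(basis))  # 2
```

In ReBaNO itself, each selected "basis element" is a pretrained physics-informed network rather than a vector, and the error indicator is residual-based, but the adaptive build-up of the reduced space follows this greedy pattern.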