🤖 AI Summary
Neural operators, deep models that map between function spaces rather than finite-dimensional vector spaces, have lacked open-source, discretization-agnostic implementations with theoretical convergence guarantees. To address this gap, we introduce NeuralOperator, the first modular and extensible Python library for neural operators, built on PyTorch. It supports state-of-the-art architectures, including Fourier Neural Operators (FNO) and Multipole Graph Neural Operators (MGNO), and enables training and inference on functional inputs and outputs under diverse discretizations while preserving discretization consistency and convergence. Through a unified API, comprehensive test coverage, and an end-to-end deployment toolchain, NeuralOperator lowers the barrier to adopting neural operators in scientific computing tasks such as solving partial differential equations. The library combines cutting-edge representational capacity with production-grade engineering, making it both research-ready and deployable in real-world applications.
📝 Abstract
We present NeuralOperator, an open-source Python library for operator learning. Neural operators generalize neural networks to maps between function spaces instead of finite-dimensional Euclidean spaces. They can be trained and used for inference on input and output functions given at various discretizations, and they satisfy discretization-convergence properties. Built on top of PyTorch, NeuralOperator provides all the tools for training and deploying neural operator models, as well as for developing new ones, in a high-quality, tested, open-source package. It combines cutting-edge models and customizability with a gentle learning curve and a simple user interface for newcomers.
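The discretization-convergence property mentioned above can be illustrated with a minimal, self-contained NumPy sketch (not the library's implementation): a single Fourier layer acts only on a function's lowest frequency modes, so the same learned weights apply to the function sampled at any sufficiently fine resolution, and the outputs agree on shared grid points.

```python
import numpy as np

def fourier_layer(u, weights):
    """Spectral convolution sketch: FFT the samples, scale the lowest
    len(weights) modes by learned complex weights, inverse FFT."""
    n = u.shape[-1]
    coeffs = np.fft.rfft(u)          # complex Fourier modes of the samples
    out = np.zeros_like(coeffs)
    k = len(weights)
    out[:k] = coeffs[:k] * weights   # truncate to the lowest k modes
    return np.fft.irfft(out, n=n)    # back to the original sample grid

rng = np.random.default_rng(0)
weights = rng.normal(size=8) + 1j * rng.normal(size=8)  # "learned" weights

# The same band-limited function sampled at two different resolutions.
f = lambda x: np.sin(2 * np.pi * x) + 0.5 * np.cos(4 * np.pi * x)
coarse = fourier_layer(f(np.linspace(0, 1, 64, endpoint=False)), weights)
fine = fourier_layer(f(np.linspace(0, 1, 128, endpoint=False)), weights)

# Identical weights act on both discretizations; subsampling the fine
# output recovers the coarse output up to floating-point error.
print(np.max(np.abs(fine[::2] - coarse)))
```

This resolution independence is exactly what lets neural operators train on one discretization and evaluate on another.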