🤖 AI Summary
Neural PDE surrogate models often fail to guarantee energy conservation and long-term stability, which compromises their physical consistency. Method: The paper proposes the Neural Functional, an architecture that learns differentiable mappings from function space to scalars (e.g., energy) and is the first to bring Hamiltonian functional modeling into neural PDE surrogates. Drawing on both operator learning and neural fields, it combines functional approximation, variational principles, and automatic differentiation so that functional derivatives can be computed implicitly and the functional and its gradients optimized jointly. Contribution/Results: On 1D and 2D nonlinear PDE tasks, the framework improves numerical stability and the conservation of energy-like quantities, maintaining high accuracy while keeping energy consistent over long-time simulations, and it establishes a physics-consistent paradigm for neural surrogate modeling.
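To make the core mechanism concrete, below is a minimal JAX sketch of the idea, not the paper's implementation: all names (`neural_functional`, `functional_derivative`, the MLP design, the choice of (u, u_x) features) are illustrative assumptions. A network maps a discretized 1D field to a scalar energy via a learned density integrated over the grid, and automatic differentiation of that scalar with respect to the sampled field, rescaled by 1/Δx, approximates the continuum functional derivative δH/δu:

```python
import jax
import jax.numpy as jnp

def init_mlp(key, sizes):
    """Initialize a small MLP as a list of (W, b) pairs."""
    params = []
    for din, dout in zip(sizes[:-1], sizes[1:]):
        key, sub = jax.random.split(key)
        W = jax.random.normal(sub, (din, dout)) * jnp.sqrt(2.0 / din)
        params.append((W, jnp.zeros(dout)))
    return params

def mlp(params, x):
    for W, b in params[:-1]:
        x = jnp.tanh(x @ W + b)
    W, b = params[-1]
    return x @ W + b

def neural_functional(params, u, dx):
    """H[u]: a learned pointwise energy density h(u, u_x), integrated over the grid."""
    u_x = jnp.gradient(u, dx)                  # finite-difference derivative feature
    feats = jnp.stack([u, u_x], axis=-1)       # shape (N, 2)
    density = mlp(params, feats).squeeze(-1)   # h(u, u_x) at each grid point
    return jnp.sum(density) * dx               # quadrature: H ≈ Σ_i h_i Δx

def functional_derivative(params, u, dx):
    """δH/δu via autodiff: the gradient of the scalar H w.r.t. the sampled field,
    divided by Δx so it approximates the continuum functional derivative."""
    return jax.grad(neural_functional, argnums=1)(params, u, dx) / dx
```

For instance, with `params = init_mlp(jax.random.PRNGKey(0), [2, 64, 64, 1])` and a field `u` sampled on `N` grid points, `functional_derivative(params, u, dx)` returns an array of the same shape as `u`; a joint training objective can then penalize errors in both the scalar `H` and this gradient field.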
📝 Abstract
Many architectures for neural PDE surrogates have been proposed in recent years, largely based on neural networks or operator learning. In this work, we derive and propose a new architecture, the Neural Functional, which learns function-to-scalar mappings. Its implementation leverages insights from operator learning and neural fields, and we show that neural functionals can implicitly learn functional derivatives. For the first time, this allows Hamiltonian mechanics to be extended to neural PDE surrogates by learning the Hamiltonian functional and optimizing through its functional derivatives. We demonstrate that the Hamiltonian Neural Functional can be an effective surrogate model, with improved stability and conservation of energy-like quantities on 1D and 2D PDEs. Beyond PDEs, functionals are prevalent in physics; functional approximation and learning with functional gradients may find other uses, such as in molecular dynamics or design optimization.
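As a rough illustration of how a learned Hamiltonian functional could drive a surrogate rollout, the sketch below (hypothetical, reusing `functional_derivative` from the snippet above) integrates KdV-type Hamiltonian dynamics u_t = ∂_x(δH/δu) with RK4 on a periodic grid; this Poisson structure is standard for KdV, but the rollout code itself is an assumption, not the paper's method:

```python
def rhs(params, u, dx):
    """Hamiltonian dynamics u_t = ∂_x(δH/δu) on a periodic domain."""
    dHdu = functional_derivative(params, u, dx)
    # central difference of δH/δu with periodic wrap-around
    return (jnp.roll(dHdu, -1) - jnp.roll(dHdu, 1)) / (2.0 * dx)

@jax.jit
def rk4_step(params, u, dx, dt):
    """One classical Runge-Kutta 4 step of the learned dynamics."""
    k1 = rhs(params, u, dx)
    k2 = rhs(params, u + 0.5 * dt * k1, dx)
    k3 = rhs(params, u + 0.5 * dt * k2, dx)
    k4 = rhs(params, u + dt * k3, dx)
    return u + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
```

The appeal of this structure is that in the continuum dH/dt = ∫ (δH/δu) ∂_x(δH/δu) dx = 0 under periodic boundary conditions, since ∂_x is skew-symmetric; evaluating `neural_functional` along the rollout therefore gives a direct check of the energy drift introduced by space and time discretization.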