🤖 AI Summary
Solving parametric partial differential equations (PDEs) continuously over arbitrary geometric domains remains challenging due to the need for generalization across unseen parameters and geometries, accurate gradient computation, and robust handling of solution discontinuities.
Method: We propose implicit Finite Operator Learning (iFOL), a physics-informed framework that integrates physical constraints with a second-order meta-learning algorithm (a MAML variant) to construct an end-to-end implicit neural mapping from parameters to solution fields, eliminating the conventional encode-process-decode pipeline. Crucially, iFOL introduces an energy- or weighted-residual-based physics loss with discrete-residual backpropagation, delivering solution-to-parameter gradients without additional loss terms or sensitivity analysis, along with natural support for resolving discontinuities.
Contribution/Results: iFOL achieves high-fidelity continuous parametric solutions for both stationary and time-dependent PDEs; reduces solution-gradient error by one order of magnitude; enables zero-shot super-resolution and mesh-free application to arbitrary geometries; and generalizes zero-shot to unseen parameters and domain shapes.
📝 Abstract
In this work, we introduce implicit Finite Operator Learning (iFOL) for the continuous and parametric solution of partial differential equations (PDEs) on arbitrary geometries. We propose a physics-informed encoder-decoder network to establish the mapping between continuous parameter and solution spaces. The decoder constructs the parametric solution field by leveraging an implicit neural field network conditioned on a latent or feature code. Instance-specific codes are derived through a PDE encoding process based on a second-order meta-learning technique. During both training and inference, a physics-informed loss function is minimized in the PDE encoding and decoding steps. iFOL expresses the loss function in an energy or weighted residual form and evaluates it using discrete residuals derived from standard numerical PDE methods. This approach results in the backpropagation of discrete residuals during both training and inference. iFOL features several key properties: (1) its unique loss formulation eliminates the need for the conventional encode-process-decode pipeline previously used in operator learning with conditional neural fields for PDEs; (2) it not only provides accurate parametric and continuous fields but also delivers solution-to-parameter gradients without requiring additional loss terms or sensitivity analysis; (3) it can effectively capture sharp discontinuities in the solution; and (4) it removes constraints on the geometry and mesh, making it applicable to arbitrary geometries and spatial sampling (zero-shot super-resolution capability). We critically assess these features and analyze the network's ability to generalize to unseen samples across both stationary and transient PDEs. The overall performance of the proposed method is promising, demonstrating its applicability to a range of challenging problems in computational mechanics.
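To make the two core ingredients concrete — a coordinate network conditioned on an instance-specific latent code, and a "PDE encoding" loop that adapts only that code by minimizing a discrete physics residual — here is a minimal toy sketch. It is an illustration under stated assumptions, not the authors' implementation: the network weights, latent size, the 1D Poisson test problem, and the numerical-gradient inner loop are all hypothetical simplifications (the paper uses a second-order MAML-style meta-learned initialization and automatic differentiation, and general energy/weighted-residual losses on arbitrary geometries).

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny coordinate network u(x; z) = MLP([x, z]) standing in for the
# conditional neural field. Weights are fixed here (the meta-learned
# outer loop is not shown); only the instance code z is adapted.
W1 = rng.normal(scale=1.0, size=(16, 1 + 4))
b1 = np.zeros(16)
W2 = rng.normal(scale=0.3, size=(1, 16))

def field(x, z):
    """Evaluate the neural field at coordinates x given latent code z."""
    inp = np.concatenate([x[:, None], np.tile(z, (len(x), 1))], axis=1)
    h = np.tanh(inp @ W1.T + b1)
    return (h @ W2.T).ravel()

def physics_loss(z, x, f):
    """Discrete residual of the 1D Poisson problem -u'' = f on a grid
    (central finite differences) plus a boundary-condition penalty.
    The residual comes from a standard numerical discretization, echoing
    iFOL's use of discrete residuals in the loss."""
    u = field(x, z)
    h = x[1] - x[0]
    res = -(u[2:] - 2.0 * u[1:-1] + u[:-2]) / h**2 - f[1:-1]
    bc = u[0]**2 + u[-1]**2
    return np.mean(res**2) + 10.0 * bc

def encode(x, f, steps=200, lr=0.05, eps=1e-4):
    """PDE encoding: adapt the latent code z by descending the physics
    loss. A central-difference gradient and a simple accept/reject step
    keep this dependency-free; autodiff would be used in practice."""
    z = np.zeros(4)
    loss = physics_loss(z, x, f)
    for _ in range(steps):
        g = np.zeros_like(z)
        for i in range(len(z)):
            dz = np.zeros_like(z)
            dz[i] = eps
            g[i] = (physics_loss(z + dz, x, f)
                    - physics_loss(z - dz, x, f)) / (2.0 * eps)
        cand = z - lr * g
        cand_loss = physics_loss(cand, x, f)
        if cand_loss < loss:           # accept only improving steps
            z, loss = cand, cand_loss
            lr *= 1.1
        else:
            lr *= 0.5
    return z

# Toy instance: -u'' = pi^2 sin(pi x) on [0, 1], u(0) = u(1) = 0.
x = np.linspace(0.0, 1.0, 41)
f = np.pi**2 * np.sin(np.pi * x)
z = encode(x, f)
u = field(x, z)   # continuous field; can be re-sampled at any resolution
```

Because `field` is a continuous function of `x`, the adapted code `z` can be decoded at any set of sample points, which is a simplified view of the zero-shot super-resolution property described above.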