🤖 AI Summary
This work addresses end-to-end operator learning for infinite-dimensional inverse problems—mapping directly from observational data to solution spaces (including point estimates and posterior distributions) without explicit forward models. We propose a structure-aware probabilistic operator network framework that unifies data-parameter relationships at the measure level, incorporating noise-robust architecture design and measure centering. The method integrates generative data augmentation, deep operator architectures (e.g., DeepONet, FNO), probabilistic loss functions, and learnable priors, thereby combining classical regularization with data-driven learning. Theoretically grounded and empirically validated, the framework improves generalization and uncertainty quantification across both linear and nonlinear inverse problems, advancing the theoretical foundations and practical applicability of operator learning in computational inversion tasks.
📝 Abstract
Operator learning offers a robust framework for approximating mappings between infinite-dimensional function spaces and has become a powerful tool for solving inverse problems in the computational sciences. This chapter surveys methodological and theoretical developments at the intersection of operator learning and inverse problems. It begins by summarizing probabilistic and deterministic approaches to inverse problems, paying special attention to emerging measure-centric formulations that treat observed data or unknown parameters as probability distributions. The discussion then turns to operator learning, covering essential components such as data generation, loss functions, and widely used architectures for representing function-to-function maps. The core of the chapter centers on the end-to-end inverse operator learning paradigm, which aims to map observed data directly to the solution of the inverse problem without requiring explicit knowledge of the forward map. It highlights the unique challenge that noise poses in this data-driven inversion setting, presents structure-aware architectures for both point predictions and posterior estimates, and surveys relevant theory for linear and nonlinear inverse problems. The chapter also discusses the estimation of priors and regularizers, where operator learning is used more selectively within classical inversion algorithms.
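To make the end-to-end paradigm concrete, the sketch below shows a DeepONet-style inverse operator in NumPy: a branch network encodes a finite vector of (possibly noisy) observations, a trunk network encodes query locations for the unknown parameter, and their inner product yields the reconstructed parameter at each query point. All dimensions, network sizes, and names here are illustrative assumptions, not specifics from the chapter; a trained version would fit the weights on simulated data-parameter pairs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes (not from the chapter): m sensors, p latent features
m, p, width = 32, 16, 64

def mlp_params(sizes, rng):
    """Random weights/biases for a small fully connected network."""
    return [(rng.normal(0.0, 1.0 / np.sqrt(a), (a, b)), np.zeros(b))
            for a, b in zip(sizes[:-1], sizes[1:])]

def mlp(params, x):
    """Forward pass with tanh activations on hidden layers."""
    for i, (W, b) in enumerate(params):
        x = x @ W + b
        if i < len(params) - 1:
            x = np.tanh(x)
    return x

branch = mlp_params([m, width, p], rng)  # encodes the observation vector
trunk  = mlp_params([1, width, p], rng)  # encodes a query location in [0, 1]

def inverse_operator(y_obs, x_query):
    """DeepONet-style map: observations -> parameter values at x_query."""
    b = mlp(branch, y_obs)             # shape (p,): latent code of the data
    t = mlp(trunk, x_query[:, None])   # shape (n, p): per-location features
    return t @ b                       # shape (n,): reconstructed parameter

y = rng.normal(size=m)         # stand-in for noisy observational data
x = np.linspace(0.0, 1.0, 50)  # grid on which to evaluate the parameter
u = inverse_operator(y, x)
print(u.shape)  # (50,)
```

Note the key design point the chapter emphasizes: the network consumes data and emits the parameter function directly, with no forward-model evaluation inside the map; noise robustness and posterior outputs require the additional structure-aware components the chapter surveys.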