Stochastic Operator Network: A Stochastic Maximum Principle Based Approach to Operator Learning

📅 2025-07-10
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the challenge of uncertainty quantification in operator learning. We propose the Stochastic Operator Network (SON), which models the branch net as a stochastic differential equation (SDE) and employs an associated backward SDE (BSDE) for gradient propagation. SON is the first operator learning framework to incorporate the stochastic maximum principle: it replaces conventional loss gradients with Hamiltonian gradients during stochastic gradient descent, thereby inherently embedding uncertainty estimation into parameter updates. The method integrates stochastic neural networks, the DeepONet architecture, SDE-based modeling, and BSDE-based adjoint backpropagation. Extensive evaluation on noisy 2D/3D PDE operator learning tasks—including Burgers’, Darcy, and Navier–Stokes equations—demonstrates that SON significantly improves uncertainty calibration and predictive robustness compared to deterministic and existing probabilistic baselines. Our approach establishes a new paradigm for trustworthy, uncertainty-aware operator learning.

📝 Abstract
We develop a novel framework for uncertainty quantification in operator learning, the Stochastic Operator Network (SON). SON combines the stochastic optimal control concepts of the Stochastic Neural Network (SNN) with the DeepONet. By formulating the branch net as an SDE and backpropagating through the adjoint BSDE, we replace the gradient of the loss function with the gradient of the Hamiltonian from the Stochastic Maximum Principle in the SGD update. This allows SON to learn the uncertainty present in operators through its diffusion parameters. We then demonstrate the effectiveness of SON in replicating several noisy operators in 2D and 3D.
Problem

Research questions and friction points this paper is trying to address.

Quantify uncertainty in operator learning
Combine stochastic optimal control with DeepONet
Learn operator uncertainty through diffusion parameters
Innovation

Methods, ideas, or system contributions that make the work stand out.

Combines SNN and DeepONet for operator learning
Uses SDE and adjoint BSDE for backpropagation
Applies Stochastic Maximum Principle in SGD
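The pipeline named in these bullets — a forward SDE pass, a backward adjoint (BSDE) pass, and a parameter update driven by Hamiltonian gradients rather than loss gradients — can be illustrated on a toy 1D problem. Everything below is an illustrative assumption, not the paper's implementation: the linear drift `theta_b * x`, the constant diffusion `theta_s`, the terminal loss, and especially the crude pathwise estimate of the martingale term `q` (real BSDE solvers typically regress `q` on the state).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: dX = theta_b * X dt + theta_s dW,
# terminal loss Phi(X_T) = (X_T - y_target)^2.
theta_b, theta_s = 0.5, 0.3
y_target = 1.0
T, N = 1.0, 50
dt = T / N

def forward(x0):
    """Forward pass: Euler-Maruyama simulation of the state SDE."""
    xs = [x0]
    dws = rng.normal(0.0, np.sqrt(dt), N)
    for dw in dws:
        x = xs[-1]
        xs.append(x + theta_b * x * dt + theta_s * dw)
    return np.array(xs), dws

def smp_gradients(xs, dws):
    """Backward pass: discrete adjoint BSDE along the sampled path.

    Hamiltonian H = b(x) * p + sigma * q, so the SGD update would use
    dH/dtheta_b = x * p and dH/dtheta_s = q instead of loss gradients.
    """
    p = 2.0 * (xs[-1] - y_target)      # terminal condition p_T = dPhi/dx
    g_b, g_s = 0.0, 0.0
    for k in reversed(range(N)):
        q = p * dws[k] / dt            # rough estimate of the martingale term
        g_b += xs[k] * p * dt          # accumulate dH/dtheta_b
        g_s += q * dt                  # accumulate dH/dtheta_s
        p = p + theta_b * p * dt       # backward Euler: dp = -(dH/dx) dt + q dW
    return g_b, g_s
```

In practice one would average these pathwise Hamiltonian gradients over a batch of simulated trajectories before each SGD step; the sketch shows only a single path.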
Ryan Bausback
Jingqiao Tang
Lu Lu
Feng Bao
Toan Huynh
University of Chicago, University of Houston, Theranos, Intellectual Ventures