Hybrid Least Squares/Gradient Descent Methods for DeepONets

📅 2025-08-21
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
In DeepONet training, assembling a least-squares system over all combinations of branch and trunk inputs yields a linear problem too large to solve directly. To address this, we propose a hybrid optimization framework that exploits the linear dependence of the output on the branch network's final-layer parameters: those weights are decoupled into an analytically solvable least-squares subproblem, while all other parameters are updated via gradient descent. Furthermore, the large-scale system is decomposed into two low-dimensional subproblems, one for the branch network and one for the trunk network, enabling efficient matrix factorization and straightforward incorporation of ℓ² regularization. The method natively supports physics-informed constraints and applies to both supervised and unsupervised learning settings. Experiments demonstrate that the algorithm significantly accelerates convergence, reduces memory footprint and computational cost, and maintains numerical stability and generalization performance.
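The linearity the summary exploits follows from the standard DeepONet ansatz; the notation below is the usual one from the DeepONet literature, not taken from this page:

```latex
% DeepONet prediction for input function u at query point y,
% with p branch outputs b_k and trunk outputs t_k:
G_\theta(u)(y) \;=\; \sum_{k=1}^{p} b_k(u)\, t_k(y),
\qquad
b_k(u) \;=\; \sum_{j} W_{kj}\, h_j(u),
```

where $h_j(u)$ are the branch network's last hidden features and $W$ its final-layer weights. For fixed hidden-layer parameters, $G_\theta(u)(y)$ is linear in $W$, so minimizing a squared loss over $W$ alone is a linear least-squares problem.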

📝 Abstract
We propose an efficient hybrid least squares/gradient descent method to accelerate DeepONet training. Since the output of a DeepONet is linear with respect to the last-layer parameters of the branch network, these parameters can be optimized with a least squares (LS) solve, while the remaining hidden-layer parameters are updated by gradient descent. However, building the LS system for all possible combinations of branch and trunk inputs yields a prohibitively large linear problem that is infeasible to solve directly. To address this issue, our method decomposes the large LS system into two smaller, more manageable subproblems, one for the branch network and one for the trunk network, and solves them separately. The method generalizes to a broader class of $L^2$ losses with a regularization term on the last-layer parameters, including the case of unsupervised learning with a physics-informed loss.
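The alternation the abstract describes can be illustrated on a toy single-hidden-layer regression, where the output is likewise linear in the last-layer weights. This is a minimal sketch of the hybrid idea only, not the paper's branch/trunk decomposition; all names, sizes, and hyperparameters below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data (stand-in for operator-learning inputs/targets).
X = rng.normal(size=(200, 5))
y = np.sin(X @ rng.normal(size=5))           # targets, shape (200,)

# One hidden layer; the output H @ w is linear in the last-layer weights w.
W1 = 0.5 * rng.normal(size=(5, 32))          # hidden parameters -> gradient descent
w = np.zeros(32)                             # last-layer parameters -> least squares
lr, lam = 1e-2, 1e-6                         # step size, l2 (ridge) regularization

for step in range(500):
    H = np.tanh(X @ W1)                      # hidden features, shape (200, 32)
    # LS subproblem: w = argmin ||H w - y||^2 + lam ||w||^2, solved in closed
    # form via the regularized normal equations.
    w = np.linalg.solve(H.T @ H + lam * np.eye(32), H.T @ y)
    # Gradient-descent step on the hidden parameters with w held fixed.
    r = H @ w - y                            # residual, shape (200,)
    grad_H = np.outer(r, w)                  # d(loss)/dH up to a constant factor
    grad_W1 = X.T @ (grad_H * (1 - H**2))    # chain rule through tanh
    W1 -= lr * grad_W1 / len(X)

print(float(np.mean((np.tanh(X @ W1) @ w - y) ** 2)))  # final training MSE
```

The closed-form LS solve keeps the last layer at its optimum for the current features at every step, which is what makes this cheaper than joint gradient descent; the paper's contribution is making the analogous solve tractable when the LS system couples branch and trunk inputs.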
Problem

Research questions and friction points this paper is trying to address.

Accelerating DeepONet training with hybrid optimization method
Solving large linear systems from branch-trunk network combinations
Generalizing approach to regularized L2 loss and unsupervised learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hybrid least squares and gradient descent training
Decomposed large system into smaller subproblems
Generalized to L2 loss with regularization