🤖 AI Summary
In DeepONet training, the output depends linearly on the branch network's final-layer parameters, so these can in principle be optimized by least squares; however, assembling the least-squares system over all combinations of branch and trunk inputs yields a prohibitively large linear problem. To address this, we propose a hybrid optimization framework: the final-layer weights are decoupled into an analytically solvable least-squares subproblem, while all other parameters are updated via gradient descent. Furthermore, the large-scale system is decomposed into two low-dimensional subproblems, one for the branch network and one for the trunk network, enabling efficient matrix factorization and straightforward incorporation of ℓ² regularization. The method naturally accommodates physics-informed constraints and applies to both supervised and unsupervised learning settings. Experiments demonstrate that our algorithm significantly accelerates convergence, reduces memory footprint and computational cost, and maintains numerical stability and generalization performance.
📝 Abstract
We propose an efficient hybrid least squares/gradient descent method to accelerate DeepONet training. Since the output of DeepONet can be viewed as linear with respect to the last-layer parameters of the branch network, these parameters can be optimized via a least squares (LS) solve, while the remaining hidden-layer parameters are updated by gradient descent. However, building the LS system for all possible combinations of branch and trunk inputs yields a prohibitively large linear problem that is infeasible to solve directly. To address this issue, our method decomposes the large LS system into two smaller, more manageable subproblems, one for the branch network and one for the trunk network, and solves them separately. This method is generalized to a broader class of $L^2$ losses with a regularization term for the last-layer parameters, including the case of unsupervised learning with a physics-informed loss.
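The linearity exploited above can be sketched concretely. In a DeepONet, the output is an inner product of branch and trunk features, so with the branch hidden features and trunk outputs held fixed, the prediction is linear in the branch final-layer weight matrix, and the regularized LS problem over all input combinations has Kronecker structure that factors into two small Gram matrices. The snippet below is a minimal illustration of this structure under assumed shapes and names; it is not the authors' algorithm, and the Sylvester-style eigendecomposition solve is one standard way to exploit the factorization.

```python
import numpy as np

# Sketch (hypothetical shapes/names): prediction[i, j] = h(u_i)^T W t(y_j),
# i.e. H @ W @ T.T, which is linear in the final-layer weights W once the
# branch hidden features H and trunk features T are frozen.
rng = np.random.default_rng(0)
n_u, n_y, p, q = 8, 16, 10, 10       # branch samples, trunk points, widths

H = rng.standard_normal((n_u, p))    # branch hidden features h(u_i)
T = rng.standard_normal((n_y, q))    # trunk features t(y_j)
Y = rng.standard_normal((n_u, n_y))  # target values s(u_i)(y_j)
lam = 1e-3                           # l2 regularization strength

# Minimize ||H W T^T - Y||_F^2 + lam ||W||_F^2. The naive system over all
# (u_i, y_j) pairs is (n_u * n_y) x (p * q); the normal equations instead
# involve only two small Gram matrices:
#     (H^T H) W (T^T T) + lam W = H^T Y T
A = H.T @ H        # p x p branch Gram matrix
B = T.T @ T        # q x q trunk Gram matrix
C = H.T @ Y @ T    # p x q right-hand side

# Solve the Sylvester-like system via eigendecomposition of A and B
# (both symmetric PSD, so the denominators below are positive).
wa, Va = np.linalg.eigh(A)
wb, Vb = np.linalg.eigh(B)
C_t = Va.T @ C @ Vb
W_t = C_t / (np.outer(wa, wb) + lam)  # elementwise solve in eigenbasis
W = Va @ W_t @ Vb.T                   # closed-form final-layer weights
```

The gradient of the regularized objective at this `W` vanishes, which is the optimality condition the hybrid scheme alternates with gradient-descent updates of the hidden-layer parameters.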