🤖 AI Summary
Low-resolution ocean simulations rely on parameterizations to represent subgrid-scale processes, yet the sensitivity of these parameterizations remains difficult to quantify—hindering model calibration and observational constraint. To address this, we propose an ensemble-learning-based neural surrogate modeling framework that integrates deep neural networks, automatic differentiation, and large-scale hyperparameter optimization. Crucially, it jointly estimates both function outputs and their gradients—even without ground-truth gradient labels—and, for the first time, quantifies epistemic uncertainty in both simultaneously. The method substantially improves accuracy, robustness, and reliability in forward prediction, long-term autoregressive simulation, and adjoint-based sensitivity estimation. It thus provides an interpretable, computationally efficient, and uncertainty-aware analytical tool for optimizing ocean model parameterizations.
📝 Abstract
Accurate simulations of the oceans are crucial to understanding the Earth system. Despite their efficiency, simulations at lower resolutions must rely on various uncertain parameterizations to account for unresolved processes. However, model sensitivity to parameterizations is difficult to quantify, making it challenging to tune these parameterizations to reproduce observations. Deep learning surrogates have shown promise for efficient computation of parametric sensitivities in the form of partial derivatives, but their reliability is difficult to evaluate without ground-truth derivatives. In this work, we leverage large-scale hyperparameter search and ensemble learning to improve forward prediction, autoregressive rollout, and backward adjoint sensitivity estimation. In particular, the ensemble method provides epistemic uncertainty estimates for both the function value predictions and their derivatives, improving the reliability of the neural surrogates in decision making.
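The core idea, an ensemble of neural surrogates whose spread gives epistemic uncertainty for both predictions and their input derivatives, can be sketched as follows. This is a minimal toy illustration, not the paper's implementation: each "member" is a tiny one-hidden-layer tanh network with random (untrained) weights standing in for an independently trained surrogate, and the exact analytic input gradient stands in for what automatic differentiation would return.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_member(rng):
    # One-hidden-layer tanh network; random weights stand in for a
    # trained surrogate mapping 2 parameterization coefficients to a scalar.
    W1 = rng.normal(size=(8, 2))
    b1 = rng.normal(size=8)
    W2 = rng.normal(size=8)
    def f(x):
        return W2 @ np.tanh(W1 @ x + b1)
    def grad_f(x):
        # Exact input gradient (what autodiff would produce):
        # df/dx = W1^T [(1 - h^2) * W2], with h = tanh(W1 x + b1)
        h = np.tanh(W1 @ x + b1)
        return W1.T @ ((1.0 - h**2) * W2)
    return f, grad_f

ensemble = [make_member(rng) for _ in range(16)]

x = np.array([0.3, -1.2])  # hypothetical parameterization inputs
vals = np.array([f(x) for f, _ in ensemble])
grads = np.stack([g(x) for _, g in ensemble])

# Ensemble mean = prediction; ensemble spread = epistemic uncertainty,
# available for the function value and its derivative alike.
print("value:    %.3f +/- %.3f" % (vals.mean(), vals.std()))
print("gradient:", grads.mean(axis=0), "+/-", grads.std(axis=0))
```

A large gradient spread flags inputs where the surrogate's sensitivity estimate should not yet be trusted for tuning decisions, which is the decision-making reliability the abstract refers to.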