🤖 AI Summary
Existing methods for uncertainty quantification in high-dimensional, spatially correlated scientific data (e.g., surface wind fields) struggle to jointly model aleatoric and epistemic uncertainties, preserve spatial correlations, and maintain computational efficiency. To address this, we propose a super-resolution neural network framework that generates multidimensional Gaussian distributional outputs. Our approach is the first to enable stable training under an image-level distributional loss; it introduces a novel Fourier-domain covariance representation that explicitly encodes spatial correlation, and it incorporates information-sharing regularization to balance image-specific fidelity with global statistical consistency. The framework supports closed-form multidimensional Gaussian outputs, heteroscedastic uncertainty estimation, and efficient sampling. Evaluated on wind speed downscaling, it maintains predictive accuracy while significantly improving uncertainty calibration and spatial structure recovery. The method shows strong potential to generalize to diverse physics-driven models.
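One plausible reading of the "Fourier-domain covariance representation" is a stationary (circulant) covariance diagonalized by the 2-D DFT, which makes the multidimensional Gaussian negative log-likelihood cheap to evaluate: the log-determinant becomes a sum of per-frequency log-variances and the quadratic form a weighted sum of Fourier residuals. The sketch below illustrates that idea under this assumption; the function and variable names are illustrative, not the authors' actual implementation.

```python
import numpy as np

def gaussian_nll_fourier(x, mu, log_spec):
    """NLL of image x under N(mu, Sigma) with a circulant covariance
    Sigma = F^H diag(exp(log_spec)) F, where F is the orthonormal 2-D DFT.
    This is a hedged sketch of a Fourier-diagonal Gaussian loss, not the
    paper's exact formulation."""
    n = x.size
    # residual in the Fourier domain; "ortho" norm makes F unitary (Parseval)
    r_hat = np.fft.fft2(x - mu, norm="ortho")
    spec = np.exp(log_spec)                       # per-frequency variances > 0
    quad = np.sum(np.abs(r_hat) ** 2 / spec)      # (x-mu)^T Sigma^{-1} (x-mu)
    logdet = np.sum(log_spec)                     # log det Sigma
    return 0.5 * (quad + logdet + n * np.log(2.0 * np.pi))
```

With a flat spectrum (`log_spec = 0`, i.e. identity covariance) this reduces to the standard i.i.d. Gaussian NLL, which is a quick sanity check; a learned non-flat spectrum is what encodes spatial correlation.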
📝 Abstract
Accurate quantification of uncertainty in neural network predictions remains a central challenge for scientific applications involving high-dimensional, correlated data. While existing methods capture either aleatoric or epistemic uncertainty, few offer closed-form, multidimensional distributions that preserve spatial correlation while remaining computationally tractable. In this work, we present a framework for training neural networks with a multidimensional Gaussian loss, generating closed-form predictive distributions over outputs with non-identically distributed and heteroscedastic structure. Our approach captures aleatoric uncertainty by iteratively estimating the means and covariance matrices, and is demonstrated on a super-resolution example. We leverage a Fourier representation of the covariance matrix to stabilize network training and preserve spatial correlation. We introduce a novel regularization strategy -- referred to as information sharing -- that interpolates between image-specific and global covariance estimates, enabling convergence of a super-resolution downscaling network trained with image-specific distributional loss functions. This framework allows for efficient sampling, explicit correlation modeling, and extensions to more complex distribution families, all without compromising prediction performance. We demonstrate the method on a surface wind speed downscaling task and discuss its broader applicability to uncertainty-aware prediction in scientific models.
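The "information sharing" regularizer described above can be read as a shrinkage-style convex interpolation between a per-image covariance estimate and a global, dataset-level one. The following minimal sketch shows that interpolation under this assumption; the function name and the blending weight `lam` are illustrative, not the authors' exact parameterization.

```python
import numpy as np

def shared_covariance(sigma_image, sigma_global, lam):
    """Convex interpolation between an image-specific covariance estimate
    and a global estimate: lam=0 trusts the image entirely, lam=1 falls
    back to the global statistics. A shrinkage-style reading of the
    'information sharing' regularizer, assumed for illustration."""
    if not 0.0 <= lam <= 1.0:
        raise ValueError("lam must lie in [0, 1]")
    # A convex combination of symmetric PSD matrices is symmetric PSD,
    # so the blended estimate remains a valid covariance.
    return lam * sigma_global + (1.0 - lam) * sigma_image
```

Pulling each image-specific estimate toward a shared global covariance is one natural way to stabilize training when every image contributes only a single realization to its own distributional loss.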