Multidimensional Distributional Neural Network Output Demonstrated in Super-Resolution of Surface Wind Speed

📅 2025-08-21
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing methods for uncertainty quantification in high-dimensional, spatially correlated scientific data (e.g., surface wind fields) struggle to jointly model aleatoric and epistemic uncertainties, preserve spatial correlations, and maintain computational efficiency. To address this, we propose a super-resolution neural network framework that generates multidimensional Gaussian distributional outputs. Our approach is the first to enable stable training under image-level distributional loss; it introduces a novel Fourier-domain covariance representation to explicitly encode spatial correlation; and incorporates information-sharing regularization to balance image-specific fidelity with global statistical consistency. The framework supports closed-form multidimensional Gaussian outputs, heteroscedastic uncertainty estimation, and efficient sampling. Evaluated on wind speed downscaling, it maintains predictive accuracy while significantly improving uncertainty calibration and spatial structure recovery. The method demonstrates strong potential for generalization to diverse physics-driven models.

Technology Category

Application Category

📝 Abstract
Accurate quantification of uncertainty in neural network predictions remains a central challenge for scientific applications involving high-dimensional, correlated data. While existing methods capture either aleatoric or epistemic uncertainty, few offer closed-form, multidimensional distributions that preserve spatial correlation while remaining computationally tractable. In this work, we present a framework for training neural networks with a multidimensional Gaussian loss, generating closed-form predictive distributions over outputs with non-identically distributed and heteroscedastic structure. Our approach captures aleatoric uncertainty by iteratively estimating the means and covariance matrices, and is demonstrated on a super-resolution example. We leverage a Fourier representation of the covariance matrix to stabilize network training and preserve spatial correlation. We introduce a novel regularization strategy -- referred to as information sharing -- that interpolates between image-specific and global covariance estimates, enabling convergence of the super-resolution downscaling network trained on image-specific distributional loss functions. This framework allows for efficient sampling, explicit correlation modeling, and extensions to more complex distribution families all without disrupting prediction performance. We demonstrate the method on a surface wind speed downscaling task and discuss its broader applicability to uncertainty-aware prediction in scientific models.
Problem

Research questions and friction points this paper is trying to address.

Accurately quantify uncertainty in neural network predictions
Generate closed-form multidimensional distributions preserving spatial correlation
Enable efficient sampling and explicit correlation modeling without performance loss
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multidimensional Gaussian loss for closed-form distributions
Fourier representation stabilizes training and preserves correlation
Information sharing regularization enables convergence of distributional networks
🔎 Similar Papers
No similar papers found.
H
Harrison J. Goldwyn
National Renewable Energy Laboratory
M
Mitchell Krock
University of Missouri
Johann Rudi
Johann Rudi
Virginia Tech
Computational ScienceInverse ProblemsFast Iterative MethodsHigh-Performance Computing
D
Daniel Getter
University of Southern California
Julie Bessac
Julie Bessac
National Renewable Energy Laboratory
statistical modelingmachine learninguncertainty quantification