Uncertainty propagation through trained multi-layer perceptrons: Exact analytical results

📅 2026-01-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study derives exact closed-form expressions for the output mean and variance of single-hidden-layer ReLU multilayer perceptrons (MLPs) with multivariate Gaussian inputs. By exploiting the piecewise-linear structure of the ReLU activation and the analytical tractability of the Gaussian distribution, the authors obtain these moments without series expansions or approximation schemes. This removes the reliance on approximations inherent in existing approaches, establishing a rigorous theoretical foundation and an efficient computational framework for uncertainty quantification in neural networks.

📝 Abstract
We give analytical results for propagation of uncertainty through trained multi-layer perceptrons (MLPs) with a single hidden layer and ReLU activation functions. More precisely, we give expressions for the mean and variance of the output when the input is multivariate Gaussian. In contrast to previous results, we obtain exact expressions without resort to a series expansion.
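The abstract's exact-mean result can be illustrated with the standard rectified-Gaussian moment identities: each hidden pre-activation is Gaussian, its post-ReLU mean and variance have closed forms, and the output mean then follows by linearity. The sketch below is a minimal illustration of that pipeline under my own naming, not the paper's derivation; in particular, the paper's exact output *variance* additionally requires cross-covariances between hidden units (bivariate Gaussian terms), which are not reproduced here.

```python
import numpy as np
from math import erf, exp, pi, sqrt

def std_normal_pdf(x):
    # Standard normal density phi(x)
    return exp(-0.5 * x * x) / sqrt(2.0 * pi)

def std_normal_cdf(x):
    # Standard normal CDF Phi(x)
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def relu_gaussian_moments(mu, sigma):
    """Exact mean and variance of ReLU(X) for X ~ N(mu, sigma^2).

    Standard rectified-Gaussian identities:
      E[ReLU(X)]   = mu*Phi(mu/sigma) + sigma*phi(mu/sigma)
      E[ReLU(X)^2] = (mu^2 + sigma^2)*Phi(mu/sigma) + mu*sigma*phi(mu/sigma)
    """
    a = mu / sigma
    phi, Phi = std_normal_pdf(a), std_normal_cdf(a)
    mean = mu * Phi + sigma * phi
    second = (mu * mu + sigma * sigma) * Phi + mu * sigma * phi
    return mean, second - mean * mean

def mlp_output_mean(W1, b1, w2, b2, mu_x, Sigma_x):
    """Exact output mean of a 1-hidden-layer ReLU MLP with Gaussian input.

    Pre-activations z = W1 x + b1 are jointly Gaussian, so the output
    mean needs only each unit's marginal moments plus linearity.
    """
    mu_z = W1 @ mu_x + b1
    var_z = np.einsum('ij,jk,ik->i', W1, Sigma_x, W1)  # diag of W1 Sigma W1^T
    h_mean = np.array([relu_gaussian_moments(m, sqrt(v))[0]
                       for m, v in zip(mu_z, var_z)])
    return float(w2 @ h_mean + b2)
```

A quick sanity check against Monte Carlo sampling of the same network confirms the closed-form mean to within sampling error.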
Problem

Research questions and friction points this paper is trying to address.

uncertainty propagation
multi-layer perceptrons
ReLU activation
Gaussian input
exact analytical results
Innovation

Methods, ideas, or system contributions that make the work stand out.

uncertainty propagation
multi-layer perceptron
ReLU activation
exact analytical results
Gaussian input
Andrew Thompson
National Physical Laboratory
sparse estimation · data science · signal processing
Miles McCrory
National Physical Laboratory, Hampton Road, Teddington, TW11 0LW, UK