🤖 AI Summary
This study quantifies how uncertainty propagates through single-hidden-layer ReLU multilayer perceptrons (MLPs) with multivariate Gaussian inputs. Exploiting the piecewise-linear nature of the ReLU activation and the analytical tractability of Gaussian distributions, the authors derive closed-form expressions for the output mean and variance, without resorting to series expansions or other approximation schemes. In contrast to prior approaches, which rely on such approximations, this yields a rigorous theoretical foundation and an efficient computational route for uncertainty quantification in neural networks.
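At the heart of closed-form results of this kind is the fact that the moments of a rectified Gaussian are themselves available in closed form. For a scalar pre-activation $z \sim \mathcal{N}(m, s^2)$ (notation introduced here for illustration, not taken from the paper), standard identities give

$$
\mathbb{E}[\max(0,z)] = m\,\Phi\!\Big(\frac{m}{s}\Big) + s\,\phi\!\Big(\frac{m}{s}\Big),
\qquad
\mathbb{E}[\max(0,z)^2] = (m^2 + s^2)\,\Phi\!\Big(\frac{m}{s}\Big) + m\,s\,\phi\!\Big(\frac{m}{s}\Big),
$$

where $\Phi$ and $\phi$ denote the standard normal CDF and PDF, and the variance follows from $\operatorname{Var}[\max(0,z)] = \mathbb{E}[\max(0,z)^2] - \mathbb{E}[\max(0,z)]^2$. These are standard building blocks for exact moment propagation; the paper's specific derivations may differ in form.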
📝 Abstract
We give analytical results for the propagation of uncertainty through trained multi-layer perceptrons (MLPs) with a single hidden layer and ReLU activation functions. More precisely, we derive expressions for the mean and variance of the output when the input follows a multivariate Gaussian distribution. In contrast to previous results, we obtain exact expressions without resorting to a series expansion.
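As a concrete illustration of what such exact expressions enable, here is a minimal sketch (not the authors' code; all variable names are assumptions made for this example) that computes the exact output mean of a one-hidden-layer ReLU MLP under a Gaussian input using the scalar identities above, and checks it against Monte Carlo sampling. The full output variance additionally requires cross-moments of correlated rectified Gaussians, which is where the paper's bivariate closed-form expressions come in; only the per-unit variances are computed here.

```python
# A minimal sketch: exact output mean of a single-hidden-layer ReLU MLP
# under a multivariate Gaussian input, checked against Monte Carlo.
# All names (W1, b1, w2, b2, mu, Sigma) are illustrative, not the paper's.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
d_in, d_hid = 3, 5

# Random network weights and Gaussian input distribution.
W1 = rng.standard_normal((d_hid, d_in))
b1 = rng.standard_normal(d_hid)
w2 = rng.standard_normal(d_hid)
b2 = 0.7
mu = rng.standard_normal(d_in)
A = rng.standard_normal((d_in, d_in))
Sigma = A @ A.T + np.eye(d_in)          # positive-definite covariance

# Pre-activations z = W1 x + b1 are Gaussian with these moments.
m = W1 @ mu + b1                        # mean of z
S = W1 @ Sigma @ W1.T                   # covariance of z
s = np.sqrt(np.diag(S))                 # per-unit standard deviation

# Closed-form first moment: E[ReLU(z_k)] = m_k Phi(m_k/s_k) + s_k phi(m_k/s_k).
alpha = m / s
relu_mean = m * norm.cdf(alpha) + s * norm.pdf(alpha)
out_mean = w2 @ relu_mean + b2          # exact output mean

# Per-unit second moment and variance; the full output variance also
# needs cross terms E[ReLU(z_i) ReLU(z_j)] for correlated pairs.
relu_2nd = (m**2 + s**2) * norm.cdf(alpha) + m * s * norm.pdf(alpha)
relu_var = relu_2nd - relu_mean**2

# Monte Carlo check of the exact mean.
x = rng.multivariate_normal(mu, Sigma, size=200_000)
y = np.maximum(W1 @ x.T + b1[:, None], 0.0).T @ w2 + b2
print("closed-form mean:", out_mean)
print("Monte Carlo mean:", y.mean())
```

The two printed means should agree to within Monte Carlo error, since the mean computation involves no approximation beyond floating point.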