🤖 AI Summary
This work addresses uncertainty quantification in probabilistic machine learning by jointly modeling epistemic (model) and aleatoric (data noise) uncertainties to enhance prediction reliability and interpretability. We propose a unified theoretical framework that systematically separates and jointly estimates both uncertainty types via Monte Carlo sampling. Methodologically, we integrate a Gaussian process latent variable model with scalable approximations based on random Fourier features, significantly reducing computational complexity while preserving the fidelity of predictive distributions. Experiments across diverse benchmark tasks demonstrate that our approach robustly disentangles uncertainty sources and achieves precise confidence calibration: it reduces expected calibration error by 23.6% on average compared to state-of-the-art methods. This yields more trustworthy probabilistic predictions, particularly critical for high-stakes decision-making scenarios.
📝 Abstract
Uncertainty Quantification (UQ) is essential in probabilistic machine learning models, particularly for assessing the reliability of predictions. In this paper, we present a systematic framework for estimating both epistemic and aleatoric uncertainty in probabilistic models. We focus on Gaussian Process Latent Variable Models and employ scalable Random Fourier Features-based Gaussian Processes to approximate predictive distributions efficiently. We derive a theoretical formulation for UQ, propose a Monte Carlo sampling-based estimation method, and conduct experiments to evaluate the impact of uncertainty estimation. Our results provide insights into the sources of predictive uncertainty and illustrate the effectiveness of our approach in quantifying the confidence in the predictions.