Quantifying Uncertainty in the Presence of Distribution Shifts

📅 2025-06-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
Neural networks often yield miscalibrated uncertainty estimates under covariate shift, undermining their reliability in open-world settings. To address this, we propose a Bayesian framework whose prior adapts to both training and test covariates, automatically inflating predictive uncertainty in regions distant from the training distribution. We employ amortized variational inference for efficient posterior approximation, and we enhance robustness by simulating diverse shift scenarios with small bootstrap samples drawn from the training data. Experiments on synthetic and real-world benchmarks demonstrate substantial improvements in uncertainty calibration and out-of-distribution detection, particularly under severe covariate shift, outperforming state-of-the-art Bayesian and non-Bayesian baselines. Our approach establishes a scalable, interpretable paradigm for uncertainty modeling in trustworthy machine learning.
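As a rough illustration of the adaptive-prior idea (a minimal sketch, not the paper's published code; the RBF similarity, function name, and constants are assumptions made for illustration), the prior scale assigned to a test input can grow as the input moves away from the training covariates:

```python
import numpy as np

# Hypothetical sketch: a Gaussian prior whose scale grows as a test point moves
# away from the training covariates, inflating uncertainty out of distribution.
def adaptive_prior_scale(x_test, X_train, base_scale=1.0, length_scale=1.0):
    # Mean RBF similarity between x_test and all training covariates.
    sq_dists = np.sum((X_train - x_test) ** 2, axis=1)
    similarity = np.mean(np.exp(-sq_dists / (2.0 * length_scale**2)))
    # Low similarity (far from the training data) -> larger prior scale.
    return base_scale / (similarity + 1e-8)

rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 2))
print(adaptive_prior_scale(np.zeros(2), X_train))      # in-distribution: small
print(adaptive_prior_scale(np.full(2, 5.0), X_train))  # far away: inflated
```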

📝 Abstract
Neural networks make accurate predictions but often fail to provide reliable uncertainty estimates, especially under covariate distribution shifts between training and testing. To address this problem, we propose a Bayesian framework for uncertainty estimation that explicitly accounts for covariate shifts. While conventional approaches rely on fixed priors, the key idea of our method is an adaptive prior, conditioned on both training and new covariates. This prior naturally increases uncertainty for inputs that lie far from the training distribution, in regions where predictive performance is likely to degrade. To efficiently approximate the resulting posterior predictive distribution, we employ amortized variational inference. Finally, we construct synthetic environments by drawing small bootstrap samples from the training data, simulating a range of plausible covariate shifts using only the original dataset. We evaluate our method on both synthetic and real-world data, where it yields substantially improved uncertainty estimates under distribution shifts.
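The synthetic-environment construction lends itself to a short sketch. Assuming only what the abstract states, namely that small bootstrap subsamples stand in for shifted covariate distributions (all names and sizes below are hypothetical):

```python
import numpy as np

# Illustrative sketch: small bootstrap subsamples of the training set act as
# synthetic "environments" whose empirical covariate distributions differ from
# the full data, mimicking plausible shifts without any external dataset.
def make_shift_environments(X, y, n_envs=20, env_size=50, seed=0):
    rng = np.random.default_rng(seed)
    envs = []
    for _ in range(n_envs):
        idx = rng.choice(len(X), size=env_size, replace=True)
        envs.append((X[idx], y[idx]))
    return envs

rng = np.random.default_rng(1)
X, y = rng.normal(size=(1000, 3)), rng.integers(0, 2, size=1000)
envs = make_shift_environments(X, y)
print(len(envs), envs[0][0].shape)  # 20 environments of 50 points each
```

Because each environment is small relative to the full dataset, its empirical distribution can differ noticeably from the training distribution, which is what makes such subsamples plausible proxies for shift.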
Problem

Research questions and friction points this paper is trying to address.

Neural networks lack reliable uncertainty estimates under distribution shifts
Proposing Bayesian framework for uncertainty estimation with adaptive priors
Improving uncertainty estimates in synthetic and real-world data scenarios
Innovation

Methods, ideas, or system contributions that make the work stand out.

Bayesian framework for uncertainty estimation
Adaptive prior conditioned on covariates
Amortized variational inference approximation (see the sketch after this list)
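A hedged sketch of the amortized-inference ingredient (the architecture, dimensions, and names below are assumptions, not the paper's model): an inference network maps each input directly to the parameters of its Gaussian variational posterior, so approximate inference costs a single forward pass rather than a per-input optimization.

```python
import torch
import torch.nn as nn

# Illustrative amortized variational posterior: a small network outputs the mean
# and log-variance of a Gaussian over latent variables for each input.
class AmortizedPosterior(nn.Module):
    def __init__(self, x_dim, z_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(x_dim, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, z_dim)
        self.log_var = nn.Linear(hidden, z_dim)

    def forward(self, x):
        h = self.net(x)
        mu, log_var = self.mu(h), self.log_var(h)
        # Reparameterization trick: z = mu + sigma * eps keeps sampling differentiable.
        z = mu + torch.exp(0.5 * log_var) * torch.randn_like(mu)
        return z, mu, log_var

q = AmortizedPosterior(x_dim=2, z_dim=4)
z, mu, log_var = q(torch.randn(8, 2))  # one forward pass infers 8 posteriors
print(z.shape, mu.shape)
```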
Yuli Slavutsky
Postdoctoral Research Scientist, Columbia University
Machine Learning · Statistics
David M. Blei
Departments of Statistics, Computer Science, Columbia University, New York, NY 10027, USA