Robust Amortized Bayesian Inference with Self-Consistency Losses on Unlabeled Data

📅 2025-01-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
Neural amortized Bayesian inference (ABI) suffers from poor robustness under model misspecification and out-of-distribution (OOD) data, in part because it is trained exclusively on labeled synthetic simulations. To address this, the paper proposes a semi-supervised framework that translates Bayesian self-consistency properties into strictly proper, unsupervised losses, allowing the posterior network to be trained on unlabeled real-world observations alongside labeled simulations, without requiring ground-truth parameter values for the former. Initial experiments show substantial improvements in posterior estimation accuracy and calibration under OOD conditions: even when observations lie far from the training distribution, the approximations remain accurate and well calibrated. This relaxes conventional ABI's exclusive dependence on large-scale labeled simulation data and advances its practical applicability in real-world settings.
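The self-consistency idea admits a compact, label-free loss. Below is a minimal sketch of one common variance-based variant, assuming a PyTorch-style posterior network with hypothetical `sample` and `log_prob` methods and known prior and likelihood densities; the names and interface are illustrative assumptions, not the paper's code.

```python
# Hedged sketch of a Bayesian self-consistency loss (names are hypothetical).
import torch

def self_consistency_loss(posterior_net, y, log_prior, log_likelihood, num_draws=8):
    """Variance-based self-consistency loss for a single observation y.

    Bayes' rule implies log p(y) = log p(theta) + log p(y | theta)
    - log p(theta | y) for *every* theta. If the amortized posterior
    q(theta | y) is exact, this quantity is constant across theta draws,
    so its variance is a training signal that needs no parameter labels.
    """
    # Assumes a reparameterizable sampler (e.g., a normalizing flow),
    # so gradients can flow through the draws.
    theta = posterior_net.sample(y, num_draws)       # (num_draws, dim_theta)
    log_q = posterior_net.log_prob(theta, y)         # (num_draws,)
    log_marginal = log_prior(theta) + log_likelihood(y, theta) - log_q
    return log_marginal.var()                        # zero iff q is self-consistent
```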

📝 Abstract
Neural amortized Bayesian inference (ABI) can solve probabilistic inverse problems orders of magnitude faster than classical methods. However, neural ABI is not yet sufficiently robust for widespread and safe applicability. In particular, when performing inference on observations outside of the scope of the simulated data seen during training, for example, because of model misspecification, the posterior approximations are likely to become highly biased. Due to the bad pre-asymptotic behavior of current neural posterior estimators in the out-of-simulation regime, the resulting estimation biases cannot be fixed in acceptable time by just simulating more training data. In this proof-of-concept paper, we propose a semi-supervised approach that enables training not only on (labeled) simulated data generated from the model, but also on unlabeled data originating from any source, including real-world data. To achieve the latter, we exploit Bayesian self-consistency properties that can be transformed into strictly proper losses without requiring knowledge of true parameter values, that is, without requiring data labels. The results of our initial experiments show remarkable improvements in the robustness of ABI on out-of-simulation data. Even if the observed data is far away from both labeled and unlabeled training data, inference remains highly accurate. If our findings also generalize to other scenarios and model classes, we believe that our new method represents a major breakthrough in neural ABI.
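To make the semi-supervised objective described in the abstract concrete, here is a hedged sketch that combines the standard neural posterior estimation loss on simulated (theta, y) pairs with a label-free self-consistency penalty on unlabeled observations. The network interface, density functions, and the weight `sc_weight` are assumptions for illustration, not the authors' implementation.

```python
# Hedged sketch of a semi-supervised training step: supervised NPE loss on
# simulations plus a self-consistency penalty on unlabeled data.
import torch

def training_step(posterior_net, theta_sim, y_sim, y_unlabeled,
                  log_prior, log_likelihood, sc_weight=1.0, num_draws=8):
    # Supervised part: maximize q(theta | y) on labeled simulations
    # (the usual neural posterior estimation objective).
    npe_loss = -posterior_net.log_prob(theta_sim, y_sim).mean()

    # Unsupervised part: for each unlabeled y, the quantity
    # log p(theta) + log p(y | theta) - log q(theta | y) estimates
    # log p(y) and must not depend on theta; penalize its variance.
    sc_loss = 0.0
    for y in y_unlabeled:
        theta = posterior_net.sample(y, num_draws)
        log_q = posterior_net.log_prob(theta, y)
        log_marginal = log_prior(theta) + log_likelihood(y, theta) - log_q
        sc_loss = sc_loss + log_marginal.var()
    sc_loss = sc_loss / len(y_unlabeled)

    return npe_loss + sc_weight * sc_loss
```

Here the unlabeled observations need not come from the simulator at all, which is what lets the posterior network adapt to out-of-simulation data.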
Problem

Research questions and friction points this paper is trying to address.

Neural Amortized Bayesian Inference
Stability
Reliability
Innovation

Methods, ideas, or system contributions that make the work stand out.

Bayesian Inference
Self-Consistency Loss
Neural Amortized Bayesian Inference (ABI)
Aayush Mishra
Department of Statistics, TU Dortmund University, Germany
Daniel Habermann
Department of Statistics, TU Dortmund University, Germany
Marvin Schmitt
ELLIS
Generative Neural Networks · Probabilistic ML · Uncertainty Quantification · Simulation Intelligence
Stefan T. Radev
Assistant Professor, Rensselaer Polytechnic Institute
Deep Learning · Bayesian Statistics · Stochastic Models · Machine Learning · Cognitive Modeling
Paul-Christian Bürkner
Department of Statistics, TU Dortmund University, Germany