🤖 AI Summary
To address the computational bottleneck in Bayesian inference caused by repeated evaluations of expensive black-box models, this paper introduces "post-process Bayesian inference": a paradigm that constructs posterior approximations solely from existing model evaluations, such as those collected along a maximum-a-posteriori (MAP) optimization trajectory, without any additional model queries. The core method, variational sparse Bayesian quadrature (VSBQ), combines sparse Gaussian process modeling, Bayesian quadrature, and variational inference, enabling efficient posterior estimation for black-box and potentially noisy likelihoods. Evaluated on synthetic benchmarks and a real-world computational neuroscience application, VSBQ achieves posterior accuracy comparable to standard MCMC or variational inference (VI) methods while accelerating inference by one to two orders of magnitude, all from pre-existing optimization trajectory data. This substantially reduces the computational cost of Bayesian inference without sacrificing fidelity.
📝 Abstract
In applied Bayesian inference scenarios, users may have access to a large number of pre-existing model evaluations, for example from maximum-a-posteriori (MAP) optimization runs. However, traditional approximate inference techniques make little to no use of this available information. We propose the framework of post-process Bayesian inference as a means to obtain a quick posterior approximation from existing target density evaluations, with no further model calls. Within this framework, we introduce Variational Sparse Bayesian Quadrature (VSBQ), a method for post-process approximate inference for models with black-box and potentially noisy likelihoods. VSBQ reuses existing target density evaluations to build a sparse Gaussian process (GP) surrogate model of the log posterior density function. Subsequently, we leverage sparse-GP Bayesian quadrature combined with variational inference to achieve fast approximate posterior inference over the surrogate. We validate our method on challenging synthetic scenarios and real-world applications from computational neuroscience. The experiments show that VSBQ builds high-quality posterior approximations by post-processing existing optimization traces, with no further model evaluations.
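The post-process pipeline sketched in the abstract (reuse stored log-density evaluations, fit a GP surrogate to the log posterior, then infer over the surrogate with no further model calls) can be caricatured in a few lines. The following is a deliberately simplified toy sketch, not the paper's VSBQ algorithm: it uses an exact GP in place of a sparse GP, a one-dimensional Gaussian target, and plain grid quadrature in place of sparse-GP Bayesian quadrature combined with variational inference. All function names, kernel hyperparameters, and the synthetic "optimization trace" are illustrative assumptions.

```python
import numpy as np

# Toy illustration of post-process inference:
# 1) reuse log-density evaluations collected during optimization,
# 2) fit a GP surrogate to the log posterior (the paper uses a sparse GP),
# 3) integrate the surrogate to approximate posterior moments
#    (a stand-in for Bayesian quadrature + variational inference).

rng = np.random.default_rng(0)

# True (unnormalized) log posterior: Gaussian with mean 1.5, std 0.7.
mu_true, sigma_true = 1.5, 0.7
log_post = lambda x: -0.5 * ((x - mu_true) / sigma_true) ** 2

# Pretend these points came from a MAP optimization run: a coarse sweep
# plus points clustered near the mode. No new model calls happen below.
X = np.concatenate([np.linspace(-2.0, 4.0, 25),
                    mu_true + 0.3 * rng.standard_normal(15)])
y = log_post(X)

# Exact GP regression with an RBF kernel, surrogate of the log density.
def rbf(a, b, ell=0.8, sf=3.0):
    return sf**2 * np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell**2)

K = rbf(X, X) + 1e-6 * np.eye(len(X))      # small jitter for stability
alpha = np.linalg.solve(K, y)

def surrogate(xs):
    """GP posterior mean of the log density at query points xs."""
    return rbf(xs, X) @ alpha

# "Post-process" step: normalize exp(surrogate) on a grid inside the
# region covered by the trace, then read off approximate moments.
grid = np.linspace(-2.0, 4.0, 2000)
dx = grid[1] - grid[0]
dens = np.exp(surrogate(grid))
dens /= dens.sum() * dx                    # normalize to a density
mean_hat = (grid * dens).sum() * dx
std_hat = np.sqrt(((grid - mean_hat) ** 2 * dens).sum() * dx)

print(mean_hat, std_hat)  # close to the true (1.5, 0.7)
```

The key point the sketch preserves is that every target-density evaluation happens before the inference step: once `X` and `y` are stored, the surrogate fit and the moment computation touch only the surrogate, which is what makes the approach cheap relative to MCMC or VI on the original model.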