🤖 AI Summary
This chapter addresses the core challenge of accurately inferring high-dimensional, non-convex posterior distributions in Bayesian neural networks and deep generative models. To this end, it organizes Bayesian approximate inference for deep learning into a coherent taxonomy and practical paradigm, unifying variational inference, Markov chain Monte Carlo (including stochastic gradient samplers), the Laplace approximation, and probabilistic programming, and thereby bridging Bayesian computation with modern deep architectures. For each family of methods, it explains the trade-offs between posterior approximation accuracy and computational efficiency across diverse deep Bayesian models, yielding a theoretically principled yet engineering-practical inference toolkit for trustworthy AI systems.
📝 Abstract
This review paper is intended for the 2nd edition of the Handbook of Markov chain Monte Carlo. We provide an introduction to approximate inference techniques as Bayesian computation methods applied to deep learning models. We organize the chapter around popular computational methods for (1) Bayesian neural networks and (2) deep generative models, explaining the unique challenges each poses for posterior inference as well as their solutions.
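To give a flavor of the stochastic gradient samplers mentioned above, here is a minimal sketch of stochastic gradient Langevin dynamics (SGLD) applied to a toy one-parameter Bayesian linear model. The model, step size, and batch size are illustrative assumptions, not taken from the chapter; the key idea is that minibatch gradients, rescaled to estimate the full-data gradient, are combined with injected Gaussian noise so the iterates approximately sample the posterior.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: y = 2*x + noise; we infer the posterior over the slope theta.
N = 1000
x = rng.normal(size=N)
y = 2.0 * x + rng.normal(scale=0.5, size=N)

def grad_log_prior(theta):
    # Standard normal prior N(0, 1): d/dtheta log p(theta) = -theta
    return -theta

def grad_log_lik(theta, xb, yb, sigma=0.5):
    # Gaussian likelihood: d/dtheta of sum_i log N(y_i | theta*x_i, sigma^2)
    return np.sum((yb - theta * xb) * xb) / sigma**2

theta, eps, batch = 0.0, 1e-4, 50   # illustrative step size and batch size
samples = []
for t in range(5000):
    idx = rng.integers(0, N, size=batch)
    # Minibatch gradient rescaled by N/batch to estimate the full-data gradient
    g = grad_log_prior(theta) + (N / batch) * grad_log_lik(theta, x[idx], y[idx])
    # SGLD update: half-step along the gradient plus sqrt(eps) Gaussian noise
    theta += 0.5 * eps * g + np.sqrt(eps) * rng.normal()
    samples.append(theta)

posterior = np.array(samples[1000:])  # discard burn-in
print(posterior.mean())               # should land near the true slope, 2.0
```

For deep models the same update is applied per-parameter to the network weights; the appeal is that each step touches only a minibatch, making posterior sampling scale like stochastic gradient descent.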