Harnessing the Power of Reinforcement Learning for Adaptive MCMC

📅 2025-07-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
Manual tuning of Markov Chain Monte Carlo (MCMC) samplers for complex probabilistic models is labor-intensive and yields poor adaptability. To address this, we propose a reinforcement learning (RL)-based adaptive MCMC framework. We formulate the Metropolis–Hastings algorithm as a Markov decision process and design an adaptive gradient-based proposal kernel that balances learnability and flexibility. Crucially, we introduce a contrastive-divergence-driven reward function—overcoming the sparsity and non-stationarity issues inherent in conventional metrics (e.g., acceptance rate) during RL training. Policy gradient methods are employed to optimize the sampling policy end-to-end. Experiments on the posteriordb benchmark demonstrate that our approach significantly improves convergence speed and effective sample size (ESS), outperforming both classical adaptive MCMC methods and manually tuned samplers.
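To make the "Metropolis–Hastings as a Markov decision process" framing concrete, the sketch below runs a plain random-walk Metropolis–Hastings loop on a one-dimensional standard-normal target, with the MDP reading noted in comments. This is a minimal illustration under assumed choices (the target, the random-walk proposal, and all names are ours); the paper itself uses adaptive gradient-based proposal kernels trained with policy gradients, which are not reproduced here.

```python
import numpy as np

def log_target(x):
    # Unnormalised log-density of a standard normal; stands in for a posterior.
    return -0.5 * x ** 2

def metropolis_hastings(n_steps, scale, rng):
    """Random-walk Metropolis-Hastings with a fixed proposal scale.

    MDP reading: the state is the current sample x, the policy's
    'action' is the proposed move (here governed by `scale`), and a
    per-transition reward can be attached to drive adaptation.
    """
    x = 0.0
    samples = np.empty(n_steps)
    accepted = 0
    for t in range(n_steps):
        proposal = x + scale * rng.standard_normal()
        # Log acceptance ratio for a symmetric proposal.
        log_alpha = log_target(proposal) - log_target(x)
        if np.log(rng.random()) < log_alpha:
            x = proposal
            accepted += 1
        samples[t] = x  # rejected steps repeat the current state
    return samples, accepted / n_steps

rng = np.random.default_rng(0)
samples, acc_rate = metropolis_hastings(5000, scale=2.4, rng=rng)
```

In this framing, manual tuning amounts to picking `scale` by hand; the RL approach instead learns the proposal parameters from a reward signal.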

📝 Abstract
Sampling algorithms drive probabilistic machine learning, and recent years have seen an explosion in the diversity of tools for this task. However, the increasing sophistication of sampling algorithms is correlated with an increase in the tuning burden. There is now a greater need than ever to treat the tuning of samplers as a learning task in its own right. In a conceptual breakthrough, Wang et al. (2025) formulated Metropolis-Hastings as a Markov decision process, opening up the possibility for adaptive tuning using Reinforcement Learning (RL). Their emphasis was on theoretical foundations; realising the practical benefit of Reinforcement Learning Metropolis-Hastings (RLMH) was left for subsequent work. The purpose of this paper is twofold: First, we observe the surprising result that natural choices of reward, such as the acceptance rate, or the expected squared jump distance, provide insufficient signal for training RLMH. Instead, we propose a novel reward based on the contrastive divergence, whose superior performance in the context of RLMH is demonstrated. Second, we explore the potential of RLMH and present adaptive gradient-based samplers that balance flexibility of the Markov transition kernel with learnability of the associated RL task. A comprehensive simulation study using the posteriordb benchmark supports the practical effectiveness of RLMH.
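The abstract names the expected squared jump distance (ESJD) as one of the "natural" rewards that turns out to provide insufficient signal for training RLMH. The sketch below shows what that quantity is; the function name and the toy chain are illustrative, not taken from the paper.

```python
import numpy as np

def expected_squared_jump_distance(chain):
    """Average squared Euclidean distance between consecutive MCMC states.

    Rejected proposals leave the chain in place and contribute a jump of
    zero, which is one way this reward can become sparse for a sticky chain.
    """
    chain = np.asarray(chain, dtype=float)
    if chain.ndim == 1:
        chain = chain[:, None]  # treat a 1D chain as n samples in R^1
    jumps = np.diff(chain, axis=0)
    return float(np.mean(np.sum(jumps ** 2, axis=1)))

# Toy chain: two rejections (repeated states) and one accepted move of size 1.
esjd = expected_squared_jump_distance([0.0, 0.0, 1.0, 1.0])
```

Here the three transitions have squared jumps 0, 1, 0, so the ESJD is 1/3; a chain that never moves scores exactly zero regardless of how badly it is mixing elsewhere.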
Problem

Research questions and friction points this paper is trying to address.

Reducing tuning burden in adaptive MCMC sampling algorithms
Improving reward signals for Reinforcement Learning Metropolis-Hastings
Balancing flexibility and learnability in gradient-based samplers
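The summary above evaluates samplers by convergence speed and effective sample size (ESS), which is the quantity a reduced tuning burden is meant to improve. For concreteness, here is a standard autocorrelation-based ESS estimator (truncating the lag sum at the first non-positive autocorrelation); this is a textbook construction, not the paper's exact implementation.

```python
import numpy as np

def effective_sample_size(chain):
    """ESS of a 1D chain: nominal length discounted by autocorrelation.

    Computes autocorrelations via FFT, then sums lags while they remain
    positive to estimate the integrated autocorrelation time tau; ESS = n / tau.
    """
    x = np.asarray(chain, dtype=float)
    n = len(x)
    x = x - x.mean()
    # Autocovariances of the zero-padded series via FFT.
    f = np.fft.rfft(x, n=2 * n)
    acf = np.fft.irfft(f * np.conj(f))[:n]
    acf /= acf[0]  # normalise so acf[0] == 1
    tau = 1.0
    for k in range(1, n):
        if acf[k] <= 0:  # truncate at the first non-positive lag
            break
        tau += 2.0 * acf[k]
    return n / tau

rng = np.random.default_rng(1)
ess_iid = effective_sample_size(rng.standard_normal(2000))  # near 2000

# A strongly autocorrelated AR(1) chain wastes most of its samples.
ar = np.empty(2000)
ar[0] = 0.0
for t in range(1, 2000):
    ar[t] = 0.9 * ar[t - 1] + rng.standard_normal()
ess_ar = effective_sample_size(ar)  # far below 2000
```

An independent chain keeps nearly all of its nominal sample size, while the sticky AR(1) chain retains only a small fraction, which is exactly the gap a better-tuned sampler narrows.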
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reinforcement Learning for adaptive MCMC tuning
Novel reward based on contrastive divergence
Adaptive gradient-based samplers with flexible kernels
Authors

Congye Wang (Newcastle University, UK)
Matthew A. Fisher (Newcastle University, UK)
Heishiro Kanagawa (Newcastle University; Machine Learning, Kernel Methods, Generative Modelling)
Wilson Chen (University of Sydney, Australia)
Chris J. Oates (Newcastle University; Statistics)