🤖 AI Summary
This work addresses the slow mixing and intricate tuning caused by the random-walk behavior of many Markov chain Monte Carlo (MCMC) methods. We propose a continuous-time, rejection-free, non-reversible jump sampler. Methodologically, we introduce a general construction based on a skew-detailed balance condition: a reference jump process is transformed by a change of measure so that it targets any distribution absolutely continuous with respect to the reference measure, and leapfrog-discretized Hamiltonian dynamics supply efficient, directed jumps. Theoretically, the sampler is rejection-free by construction, and we derive a non-reversible, continuous-time variant of Hamiltonian Monte Carlo (HMC) with provable geometric ergodicity under convex potentials. Empirically, the sampler explores the space directionally, mixing significantly faster than standard HMC.
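The summary does not spell out the paper's construction, but the sampler class it describes (continuous-time, rejection-free, non-reversible) can be illustrated with a well-known member of that class: the one-dimensional Zig-Zag process targeting a standard Gaussian. The sketch below is purely illustrative and is not the paper's device; the state moves ballistically, waits an exactly simulated exponential event time, and flips direction at each event, with no rejections anywhere.

```python
import numpy as np

def zigzag_gaussian(T, seed=0):
    """Toy 1D Zig-Zag sampler for N(0,1): continuous-time, non-reversible,
    and rejection-free -- every event is a deterministic direction flip.
    Returns ergodic averages of X and X^2 along the piecewise-linear path."""
    rng = np.random.default_rng(seed)
    x, theta, t = 0.0, 1.0, 0.0
    m1 = m2 = 0.0                       # time integrals of x and x^2
    while t < T:
        # Event rate along the segment is lambda(s) = max(0, a + s), a = theta*x.
        a = theta * x
        # Exact event time by inverting the integrated rate against Exp(1).
        tau = np.sqrt(max(a, 0.0) ** 2 + 2.0 * rng.exponential()) - a
        tau = min(tau, T - t)           # clip the final segment at time T
        # Exact path integrals over the linear segment x(s) = x + theta*s.
        m1 += x * tau + theta * tau ** 2 / 2.0
        m2 += x ** 2 * tau + x * theta * tau ** 2 + tau ** 3 / 3.0
        x += theta * tau                # deterministic move: nothing rejected
        theta, t = -theta, t + tau      # event: flip the direction
    return m1 / T, m2 / T

mean, second_moment = zigzag_gaussian(T=20_000.0)
# Ergodic averages should approach E[X] = 0 and E[X^2] = 1 for N(0,1).
```

Because the holding times are drawn from the exact event-time distribution, the process never discards a proposal, which is the defining feature of the rejection-free samplers discussed above.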
📝 Abstract
Markov chain sampling methods form the backbone of modern computational statistics. However, many popular methods are prone to random-walk behavior, i.e., diffusion-like exploration of the sample space, leading to slow mixing that requires intricate tuning to alleviate. Non-reversible samplers can resolve some of these issues. We introduce a device that turns a jump process satisfying a skew-detailed balance condition with respect to a reference measure into a process that samples a target measure absolutely continuous with respect to that reference measure. The resulting sampler is rejection-free, non-reversible, and continuous-time. As an example, we apply the device to Hamiltonian dynamics discretized by the leapfrog integrator, resulting in a rejection-free, non-reversible, continuous-time version of Hamiltonian Monte Carlo (HMC). We prove geometric ergodicity of the resulting sampler under certain convexity conditions, and demonstrate its qualitatively different behavior from HMC through numerical examples.
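The deterministic ingredient named in the abstract, the leapfrog discretization of Hamiltonian dynamics, is standard and can be sketched directly. The following is a minimal implementation for a separable Hamiltonian H(q, p) = U(q) + p²/2 (the target and step parameters are illustrative, not taken from the paper); the key properties exploited by HMC-type samplers are volume preservation and time-reversibility under a momentum flip.

```python
import numpy as np

def leapfrog(q, p, grad_U, eps, L):
    """Run L leapfrog steps of size eps for H(q, p) = U(q) + |p|^2 / 2."""
    q, p = q.copy(), p.copy()
    p -= 0.5 * eps * grad_U(q)       # initial half step for momentum
    for _ in range(L - 1):
        q += eps * p                 # full step for position
        p -= eps * grad_U(q)         # full step for momentum
    q += eps * p
    p -= 0.5 * eps * grad_U(q)       # final half step for momentum
    return q, p

# Standard Gaussian target: U(q) = |q|^2 / 2, so grad_U(q) = q.
grad_U = lambda q: q
q0, p0 = np.array([1.0]), np.array([0.5])
q1, p1 = leapfrog(q0, p0, grad_U, eps=0.1, L=10)
# Time-reversibility: integrating back with the momentum negated
# recovers the initial state exactly (up to floating point).
q2, p2 = leapfrog(q1, -p1, grad_U, eps=0.1, L=10)
```

Reversibility is what lets a sampler built on leapfrog correct the discretization error exactly; in the continuous-time construction described above, this correction happens through the jump rates rather than through an accept/reject step.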