🤖 AI Summary
Unadjusted Hamiltonian Monte Carlo (HMC) and underdamped Langevin algorithms suffer from asymptotic bias caused by numerical integration error in high-dimensional sampling, and they lack an automated way to adapt the step size.
Method: This paper establishes a quantitative relationship between the Hamiltonian energy error of the numerical integrator and the resulting asymptotic bias, and uses it to propose the first black-box step-size adaptation scheme with provable bias bounds, eliminating the need for a Metropolis–Hastings (MH) correction.
Contribution/Results: The method controls asymptotic bias to within a user-specified tolerance: the bound is proved rigorously for Gaussian targets and validated numerically on canonical Bayesian models. Empirical evaluation demonstrates a several-fold speedup over MH-adjusted samplers in high dimensions, along with improved stability. This removes a key practical barrier to deploying unadjusted samplers in real-world applications.
📝 Abstract
Hamiltonian Monte Carlo and underdamped Langevin Monte Carlo are state-of-the-art methods for taking samples from high-dimensional distributions with a differentiable density function. To generate samples, they numerically integrate Hamiltonian or Langevin dynamics. This numerical integration introduces an asymptotic bias in Monte Carlo estimators of expectation values, which can be eliminated by adjusting the dynamics with a Metropolis-Hastings (MH) proposal step. Alternatively, one can trade bias for variance by avoiding MH, and select an integration step size that ensures sufficiently small asymptotic bias, relative to the variance inherent in a finite set of samples. Such unadjusted methods often significantly outperform their adjusted counterparts in high-dimensional problems where sampling would otherwise be prohibitively expensive, yet are rarely used in statistical applications due to the absence of an automated way of choosing a step size. We propose just such an automatic tuning scheme that takes a user-provided asymptotic bias tolerance and selects a step size that ensures it. The key to the method is a relationship we establish between the energy error in the integration and asymptotic bias. For Gaussians, we show that this procedure rigorously bounds the asymptotic bias. We then numerically show that the procedure works beyond Gaussians, on typical Bayesian problems. To demonstrate the practicality of the proposed scheme, we provide a comprehensive comparison of adjusted and unadjusted samplers, showing that with our tuning scheme, the unadjusted methods achieve close to optimal performance and significantly and consistently outperform their adjusted counterparts.
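To make the core idea concrete, here is a minimal sketch of energy-error-based step-size tuning for unadjusted HMC. This is not the paper's actual algorithm: the standard-Gaussian target, the halving rule, and the mean-absolute-energy-error criterion below are illustrative assumptions chosen for simplicity. The sketch shrinks the leapfrog step size until the average per-trajectory Hamiltonian energy error falls below a user-supplied tolerance, the quantity the paper relates to asymptotic bias.

```python
import numpy as np

def leapfrog(q, p, grad_logp, step, n_steps):
    # Standard leapfrog: half kick, alternating drifts and kicks, half kick.
    p = p + 0.5 * step * grad_logp(q)
    for i in range(n_steps):
        q = q + step * p
        if i < n_steps - 1:
            p = p + step * grad_logp(q)
    p = p + 0.5 * step * grad_logp(q)
    return q, p

def mean_energy_error(step, dim, n_steps, n_trials, rng):
    # Illustrative target: standard Gaussian, log p(q) = -0.5*|q|^2,
    # so the Hamiltonian is H(q, p) = 0.5*(|q|^2 + |p|^2).
    grad_logp = lambda q: -q
    errs = []
    for _ in range(n_trials):
        q = rng.standard_normal(dim)   # draw from the exact target
        p = rng.standard_normal(dim)   # momentum refreshment
        h0 = 0.5 * (q @ q + p @ p)
        q1, p1 = leapfrog(q, p, grad_logp, step, n_steps)
        h1 = 0.5 * (q1 @ q1 + p1 @ p1)
        errs.append(abs(h1 - h0))
    return float(np.mean(errs))

def tune_step(tol, dim=100, n_steps=10, n_trials=32, step=1.0, seed=0):
    # Halve the step size until the average energy error per trajectory
    # meets the tolerance; leapfrog error vanishes as step -> 0, so this
    # terminates. A tighter tolerance yields a smaller step (and less bias).
    rng = np.random.default_rng(seed)
    while mean_energy_error(step, dim, n_steps, n_trials, rng) > tol:
        step *= 0.5
    return step

step = tune_step(tol=0.05)
print(f"tuned step size: {step:.4f}")
```

In the actual method the tolerance is tied, via the energy-error/bias relationship, to the user's asymptotic-bias budget; the sketch only shows the control loop, using exact Gaussian draws in place of samples produced by the chain itself.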