🤖 AI Summary
This work establishes non-asymptotic total variation convergence bounds for the Kinetic Langevin Monte Carlo (KLMC) algorithm sampling from high-dimensional target distributions. Under the assumptions that the target measure satisfies a Poincaré inequality and that the potential has a Lipschitz-continuous gradient, the authors derive the first dimensionally explicit convergence rate of order $O(\sqrt{d})$, substantially improving upon Dalalyan's (2017) $O(d)$ bound for the (non-kinetic) Langevin Monte Carlo algorithm. Methodologically, the analysis integrates probabilistic coupling techniques, a refined exploitation of the Poincaré inequality, and tight control of the discretization error in the approximation of the underlying stochastic differential equation (SDE). Crucially, this is the first result to rigorously quantify how the kinetic mechanism (i.e., the incorporation of momentum) alleviates the curse of dimensionality: momentum reduces the convergence complexity from linear to square-root dependence on the dimension, providing foundational theoretical justification for momentum-based samplers in high-dimensional settings.
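To make the kinetic mechanism concrete, below is a minimal Python sketch of one KLMC-style update. It uses a plain Euler–Maruyama discretization of the kinetic Langevin SDE $dX_t = V_t\,dt$, $dV_t = -\gamma V_t\,dt - \nabla U(X_t)\,dt + \sqrt{2\gamma}\,dB_t$; the paper's exact scheme (KLMC is often stated with an exponential integrator) and parameter conventions may differ, and the names `grad_U`, `h`, and `gamma` are illustrative, not taken from the paper.

```python
import numpy as np

def klmc_step(x, v, grad_U, h, gamma, rng):
    """One Euler-Maruyama step of kinetic (underdamped) Langevin dynamics.

    x, v   : position and momentum, arrays of shape (d,)
    grad_U : callable returning the gradient of the potential U
    h      : step size
    gamma  : friction coefficient
    """
    # The position is driven by the momentum alone (no direct noise).
    x_new = x + h * v
    # The momentum feels friction, the potential gradient, and Gaussian noise.
    noise = np.sqrt(2.0 * gamma * h) * rng.standard_normal(x.shape)
    v_new = v - h * (gamma * v + grad_U(x)) + noise
    return x_new, v_new

# Toy usage: sample a standard Gaussian target, U(x) = |x|^2 / 2, in d = 100.
rng = np.random.default_rng(0)
d = 100
x, v = np.zeros(d), np.zeros(d)
for _ in range(10_000):
    x, v = klmc_step(x, v, grad_U=lambda z: z, h=0.05, gamma=2.0, rng=rng)
```

Note that the Gaussian noise enters only through the momentum equation; this extra smoothness of the position process is what coupling arguments for kinetic samplers typically exploit, and it is the structural difference from the non-kinetic algorithm, where the noise acts on the position directly.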
📝 Abstract
We prove non-asymptotic total variation estimates for the kinetic Langevin algorithm in high dimension when the target measure satisfies a Poincaré inequality and has a gradient-Lipschitz potential. The main point is that the estimate improves significantly upon the corresponding bound for the non-kinetic version of the algorithm, due to Dalalyan. In particular, the dimension dependence drops from $O(n)$ to $O(\sqrt{n})$.
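For reference, the two standing assumptions can be written in their standard forms (the paper's constants and normalizations may differ): the target measure $\pi \propto e^{-U}$ satisfies a Poincaré inequality with constant $C_P$, i.e. $\operatorname{Var}_\pi(f) \le C_P \int \|\nabla f\|^2 \, d\pi$ for all smooth $f$, and the potential $U$ has an $L$-Lipschitz gradient, $\|\nabla U(x) - \nabla U(y)\| \le L \|x - y\|$ for all $x, y \in \mathbb{R}^n$.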