🤖 AI Summary
This work addresses the poor dimension dependence of the unadjusted Langevin algorithm: for strongly log-concave targets, the number of iterations needed to reach a given global $W_2$ error grows in proportion to $d$ or $\sqrt{d}$. It identifies and formalizes a “delocalization of bias” phenomenon: although global convergence slows with dimension, every $K$-dimensional marginal distribution reaches the target $W_2$ accuracy within $O(K\,\mathrm{polylog}\,d)$ iterations. To capture this, the authors introduce a novel $W_{2,\ell^\infty}$ metric that decouples the analysis of marginals from the global coupling inherent in the standard $W_2$ metric; a key technical obstacle is that this metric lacks a one-step contraction property. They rigorously establish delocalization for Gaussian targets and for strongly log-concave targets with certain sparse interactions, and they construct a counterexample showing that the effect fails in general. Finally, asymptotic arguments are used to explore how far delocalization extends beyond the Gaussian and sparse-interaction settings.
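Neither the summary nor the abstract spells the metric out; a natural reading of the notation, stated here as an assumption rather than as the paper's definition, is the 2-Wasserstein distance with $\ell^\infty$ ground cost:

$$
W_{2,\ell^\infty}(\mu,\nu) \;=\; \inf_{\gamma\in\Pi(\mu,\nu)} \Bigl( \mathbb{E}_{(X,Y)\sim\gamma}\, \|X-Y\|_{\ell^\infty}^{2} \Bigr)^{1/2},
\qquad \|z\|_{\ell^\infty} = \max_{1\le i\le d} |z_i|.
$$

Under this reading, a bound on $W_{2,\ell^\infty}(\mu,\nu)$ controls all one-dimensional marginals at once: since $|x_i - y_i| \le \|x-y\|_{\ell^\infty}$ for each coordinate $i$, one gets $W_2(\mu_i,\nu_i) \le W_{2,\ell^\infty}(\mu,\nu)$, consistent with the marginal-level guarantees described above.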
📝 Abstract
The unadjusted Langevin algorithm is commonly used to sample probability distributions in extremely high-dimensional settings. However, existing analyses of the algorithm for strongly log-concave distributions suggest that, as the dimension $d$ of the problem increases, the number of iterations required to ensure convergence within a desired error in the $W_2$ metric scales in proportion to $d$ or $\sqrt{d}$. In this paper, we argue that, despite this poor scaling of the $W_2$ error for the full set of variables, the behavior for a small number of variables can be significantly better: a number of iterations proportional to $K$, up to logarithmic terms in $d$, often suffices for the algorithm to converge to within a desired $W_2$ error for all $K$-marginals. We refer to this effect as delocalization of bias. We show that the delocalization effect does not hold universally and prove its validity for Gaussian distributions and strongly log-concave distributions with certain sparse interactions. Our analysis relies on a novel $W_{2,\ell^\infty}$ metric to measure convergence. A key technical challenge we address is the lack of a one-step contraction property in this metric. Finally, we use asymptotic arguments to explore potential generalizations of the delocalization effect beyond the Gaussian and sparse interactions setting.
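For concreteness, the following is a minimal sketch of the unadjusted Langevin iteration $x_{k+1} = x_k - h\,\nabla U(x_k) + \sqrt{2h}\,\xi_k$ on a strongly log-concave target with sparse (tridiagonal) interactions, one of the regimes covered by the paper. The specific target, step size, and chain counts below are illustrative choices, not values taken from the paper.

```python
import numpy as np

def ula(grad_U, x0, step, n_iters, rng):
    """Unadjusted Langevin algorithm (ULA):
        x_{k+1} = x_k - step * grad_U(x_k) + sqrt(2 * step) * xi_k,
    with xi_k standard Gaussian noise. x0 may hold a batch of chains,
    one chain per row, in which case grad_U must act row-wise."""
    x = np.array(x0, dtype=float)
    for _ in range(n_iters):
        x = x - step * grad_U(x) + np.sqrt(2.0 * step) * rng.standard_normal(x.shape)
    return x

# Illustrative strongly log-concave target with sparse (tridiagonal)
# interactions: pi(x) proportional to exp(-x^T A x / 2), where
# A = tridiag(-1/4, 1, -1/4) is strictly diagonally dominant.
d, n_chains = 200, 10_000
A = np.eye(d) - 0.25 * (np.eye(d, k=1) + np.eye(d, k=-1))
grad_U = lambda x: x @ A  # row-wise gradient of U(x) = x^T A x / 2 (A symmetric)

rng = np.random.default_rng(0)
samples = ula(grad_U, np.zeros((n_chains, d)), step=0.05, n_iters=400, rng=rng)

# First-coordinate marginal variance vs. the exact value (A^{-1})_{11};
# the gap, up to Monte Carlo noise, is ULA's O(step) bias in that marginal.
print(samples[:, 0].var(), np.linalg.inv(A)[0, 0])
```

Because the target is Gaussian, the exact marginal variance $(A^{-1})_{11}$ is available in closed form, so a single low-dimensional marginal of the chain can be checked directly without ever resolving the full $d$-dimensional distribution.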