🤖 AI Summary
The classical Metropolis-Adjusted Langevin Algorithm (MALA) fails for high-dimensional distributions whose log-density is only locally Lipschitz (hence possibly non-differentiable and nonconvex), because it relies on exact gradients. Method: We propose the Metropolis-Adjusted Subdifferential Langevin Algorithm (MASLA), the first adaptation of MALA to incorporate subdifferentials: it replaces gradients with stochastic subgradient estimates while retaining the Metropolis–Hastings correction, ensuring validity under nonsmoothness. Contribution/Results: We establish ergodicity guarantees for MASLA under nonconvex, nonsmooth conditions. Empirically, MASLA matches the computational efficiency of MALA while markedly improving sampling stability and accuracy on challenging nonsmooth targets, such as densities with sharp cusps or piecewise-linear structure, thereby substantially extending the applicability of Langevin-based MCMC methods.
📝 Abstract
The Metropolis-Adjusted Langevin Algorithm (MALA) is a widely used Markov chain Monte Carlo (MCMC) method for sampling from high-dimensional distributions. However, MALA relies on differentiability assumptions that restrict its applicability. In this paper, we introduce the Metropolis-Adjusted Subdifferential Langevin Algorithm (MASLA), a generalization of MALA to distributions whose log-densities are locally Lipschitz and, in general, non-differentiable and non-convex. We evaluate MASLA against other sampling algorithms in settings where those algorithms are applicable. Our results demonstrate that MASLA handles a broader class of distributions while maintaining computational efficiency.
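To make the core idea concrete, here is a minimal, illustrative sketch of a Langevin proposal driven by a subgradient rather than a gradient, with the usual Metropolis–Hastings correction. It targets the nonsmooth Laplace log-density log π(x) = −‖x‖₁ (up to a constant), which is non-differentiable at zero; −sign(x) is a valid subgradient everywhere. This toy is an assumption-laden simplification, not the paper's MASLA implementation (function names and the fixed step size are ours).

```python
import numpy as np

# Illustrative sketch only: a subgradient-based MALA step for the nonsmooth
# Laplace target log pi(x) = -|x|_1 (up to an additive constant).
# This is NOT the paper's implementation; names and step size are assumptions.

def log_density(x):
    return -np.abs(x).sum()            # log pi up to a constant

def subgrad_log_density(x):
    return -np.sign(x)                 # an element of the subdifferential

def mala_step(x, step, rng):
    # Langevin proposal: drift along a subgradient, plus Gaussian noise
    mean_x = x + step * subgrad_log_density(x)
    y = mean_x + np.sqrt(2.0 * step) * rng.standard_normal(x.shape)
    # Metropolis-Hastings correction for the asymmetric proposal density
    mean_y = y + step * subgrad_log_density(y)
    log_q_y_given_x = -np.sum((y - mean_x) ** 2) / (4.0 * step)
    log_q_x_given_y = -np.sum((x - mean_y) ** 2) / (4.0 * step)
    log_alpha = (log_density(y) - log_density(x)
                 + log_q_x_given_y - log_q_y_given_x)
    if np.log(rng.uniform()) < log_alpha:
        return y                        # accept the proposal
    return x                            # reject: stay at the current state

rng = np.random.default_rng(0)
x = np.ones(2)
samples = []
for _ in range(5000):
    x = mala_step(x, 0.5, rng)
    samples.append(x.copy())
samples = np.array(samples)
```

The Metropolis–Hastings step is what keeps the chain exactly invariant for π even though the drift uses only a subgradient; the acceptance ratio must use the asymmetric proposal densities q(y|x) and q(x|y), since the drift differs at x and y.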