🤖 AI Summary
This study investigates the convergence behavior of the Metropolis–Hastings algorithm in non-geometrically ergodic settings, with particular emphasis on its relationship to the tail properties of the target distribution. Leveraging Markov chain ergodicity theory, probabilistic limit analysis, and carefully constructed counterexamples, the work provides a rigorous asymptotic analysis of both random-walk and guided-walk Metropolis algorithms when targeting distributions with polynomial tails or strictly convex potentials. The main contributions include establishing sufficient conditions for non-geometric ergodicity (shown by counterexample to be close to necessary), proving that for polynomial-tailed targets the guided-walk variant converges at twice the polynomial rate of the random-walk version, and demonstrating that under strictly convex potentials both algorithms move at comparable ballistic speeds when the state variable is large.
📝 Abstract
We prove a general result that if a Metropolis--Hastings algorithm has a proposal that is not geometrically ergodic, and the acceptance rate approaches unity at a suitable rate as the state variable becomes large, then the Metropolised chain will also not be geometrically ergodic. Our conditions seem stronger than might be expected, but are shown to be necessary through a counterexample. We then turn our attention to the random walk and guided walk Metropolis algorithms. We show that if the target distribution has polynomial tails, the latter converges at twice the polynomial rate of the former, but that if instead the target distribution has a strictly convex potential, then the random walk Metropolis behaves as a $1/2$-lazy version of the guided walk Metropolis when the state variable is large, and therefore moves at a similar (ballistic) speed.
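To make the two algorithms compared in the abstract concrete, here is a minimal sketch (not the authors' code) of the random walk Metropolis and the guided walk Metropolis of Gustafson, targeting an illustrative polynomial-tailed density proportional to $(1+x^2)^{-\alpha/2}$. The step size, tail index `alpha`, and Gaussian increment distribution are arbitrary choices for illustration; the guided walk augments the state with a direction $\sigma \in \{-1,+1\}$ that is flipped on rejection.

```python
import math
import random

def log_target(x, alpha=3.0):
    # Illustrative polynomial-tailed target: density ∝ (1 + x^2)^(-alpha/2)
    return -0.5 * alpha * math.log1p(x * x)

def rwm(n_steps, step=1.0, x0=0.0, seed=0):
    """Random walk Metropolis: symmetric Gaussian proposals,
    accept with probability min(1, pi(y)/pi(x))."""
    rng = random.Random(seed)
    x, chain = x0, []
    for _ in range(n_steps):
        y = x + step * rng.gauss(0.0, 1.0)
        if rng.random() < math.exp(min(0.0, log_target(y) - log_target(x))):
            x = y
        chain.append(x)
    return chain

def guided_walk(n_steps, step=1.0, x0=0.0, seed=0):
    """Guided walk Metropolis: proposals always move in the current
    direction sigma; on rejection the direction is reversed."""
    rng = random.Random(seed)
    x, sigma, chain = x0, 1, []
    for _ in range(n_steps):
        y = x + sigma * step * abs(rng.gauss(0.0, 1.0))
        if rng.random() < math.exp(min(0.0, log_target(y) - log_target(x))):
            x = y
        else:
            sigma = -sigma  # direction flip preserves the target on the augmented space
        chain.append(x)
    return chain
```

The persistent direction is what lets the guided walk traverse heavy polynomial tails faster than the diffusive random walk, the regime in which the paper proves the doubled polynomial convergence rate.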