🤖 AI Summary
This work addresses the lack of finite-time performance guarantees for stochastic approximation algorithms under heavy-tailed and long-range dependent noise. Focusing on root-finding for strongly monotone operators, the paper develops a convergence analysis built on a noise-averaging mechanism. Without altering the iterative update, it establishes the first finite-time moment bounds and explicit convergence rates under these noise conditions. The analysis combines strong monotonicity with finite-time moment arguments and applies to settings such as stochastic gradient descent (SGD) and gradient-based games. Numerical experiments corroborate the theory and quantify the impact of these noise structures on algorithmic performance.
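To make the setup concrete, here is a minimal Python sketch of the plain SA iteration x_{k+1} = x_k − α_k (F(x_k) + w_k) driven by heavy-tailed noise, assuming a linear strongly monotone operator F(x) = Ax − b; the operator, step-size schedule, and tail index are illustrative choices, not taken from the paper. Consistent with the summary, the noise averaging is an analysis device, so the update itself is left unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5
A = 2.0 * np.eye(d)              # F(x) = A x - b is 2-strongly monotone
b = rng.normal(size=d)
x_star = np.linalg.solve(A, b)   # true root, used only to measure error

def F(x):
    return A @ x - b

x = np.zeros(d)
for k in range(1, 10_001):
    alpha = 1.0 / k                      # diminishing step size
    # Student-t noise with df = 1.5: heavy tails, infinite variance
    noise = rng.standard_t(1.5, size=d)
    x = x - alpha * (F(x) + noise)       # vanilla SA update, unmodified

print("error after 10k iterations:", np.linalg.norm(x - x_star))
```

With tail index 1.5 the noise has a finite p-th moment only for p < 1.5 and infinite variance, which is exactly the regime where classical bounded-second-moment analyses break down.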
📝 Abstract
Stochastic approximation (SA) is a fundamental iterative framework with broad applications in reinforcement learning and optimization. Classical analyses typically rely on martingale-difference or Markovian noise with bounded second moments, but many practical settings, including finance and communications, involve heavy-tailed and long-range dependent (LRD) noise. In this work, we study SA for finding the root of a strongly monotone operator under these non-classical noise models. We establish the first finite-time moment bounds in both settings, providing explicit convergence rates that quantify the impact of heavy tails and temporal dependence. Our analysis employs a noise-averaging argument that regularizes the impact of noise without modifying the iteration. Finally, we apply our general framework to stochastic gradient descent (SGD) and gradient play, and corroborate our finite-time analysis through numerical experiments.
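As a companion illustration of the gradient-play application mentioned in the abstract, below is a hedged sketch of simultaneous gradient play in a two-player quadratic game whose pseudo-gradient is strongly monotone; the payoff functions, constants, and step sizes are invented for illustration and are not from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Player 1 minimizes f1(x1, x2) = x1**2 + x1*x2 over x1;
# player 2 minimizes f2(x1, x2) = x2**2 - x1*x2 over x2.
# The pseudo-gradient G(x) = (2*x1 + x2, 2*x2 - x1) = M @ x is
# strongly monotone: its symmetric part is 2 * I.
M = np.array([[2.0, 1.0],
              [-1.0, 2.0]])

x = np.array([5.0, -3.0])                # initial strategy profile
for k in range(1, 5_001):
    alpha = 1.0 / k
    noise = rng.standard_t(1.5, size=2)  # heavy-tailed gradient noise
    x = x - alpha * (M @ x + noise)      # simultaneous gradient play

print("strategies (Nash equilibrium is (0, 0)):", x)
```

Here the unique Nash equilibrium is the root of the pseudo-gradient, so gradient play is a special case of SA root-finding for a strongly monotone operator, matching the framework described above.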