🤖 AI Summary
This work addresses the lack of a unified framework for analyzing the convergence of finite-sum optimization algorithms such as SAG, SAGA, and IAG, whose original proofs are intricate and mutually incompatible. Focusing on smooth, strongly convex problems, we propose the first unified, concise, and modular convergence analysis that applies simultaneously to all three algorithmic classes. Our approach first bounds the delays induced by stochastic sub-sampling using elementary concentration inequalities, then designs a novel Lyapunov function that accounts for these delays, yielding high-probability convergence bounds. The framework extends naturally to non-convex objectives and Markovian sampling. As concrete applications, we establish the first high-probability convergence guarantees for SAG and SAGA and significantly improve the best known convergence rate for IAG.
📝 Abstract
Stochastic variance-reduced algorithms such as Stochastic Average Gradient (SAG) and SAGA, and their deterministic counterparts like the Incremental Aggregated Gradient (IAG) method, have been extensively studied in large-scale machine learning. Despite their popularity, existing analyses of these algorithms are disparate, relying on different proof techniques tailored to each method. Moreover, the original proof of SAG is notoriously involved, requiring computer-aided analysis. Focusing on finite-sum optimization with smooth and strongly convex objective functions, our main contribution is a single unified convergence analysis that applies to all three algorithms: SAG, SAGA, and IAG. Our analysis features two key steps: (i) bounding the delays due to stochastic sub-sampling using simple concentration tools, and (ii) carefully designing a novel Lyapunov function that accounts for such delays. The resulting proof is short and modular, yielding the first high-probability bounds for SAG and SAGA, and it extends seamlessly to non-convex objectives and Markovian sampling. As an immediate byproduct of our new analysis technique, we obtain the best known rates for the IAG algorithm, significantly improving upon prior bounds.
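To make the algorithm family concrete, the following is a minimal sketch of the standard SAGA update on a toy finite-sum problem (this illustrates the well-known method being analyzed, not the paper's proof technique; the problem instance, step size, and iteration count are illustrative choices). IAG would replace the random index with a cyclic one, and SAG would scale the gradient correction by `1/n`.

```python
import numpy as np

def saga(grad_fns, x0, step, iters, rng):
    """Standard SAGA update for min_x (1/n) * sum_i f_i(x).

    Keeps a table of stale per-component gradients g_i; each step replaces
    one stale entry with a fresh gradient and moves along an unbiased,
    variance-reduced direction.
    """
    n = len(grad_fns)
    x = x0.copy()
    table = np.stack([g(x) for g in grad_fns])   # stored gradients g_i
    avg = table.mean(axis=0)                     # (1/n) * sum_i g_i
    for _ in range(iters):
        j = rng.integers(n)                      # uniform sub-sampling (IAG: cyclic)
        g_new = grad_fns[j](x)
        v = g_new - table[j] + avg               # unbiased variance-reduced direction
        x = x - step * v
        avg = avg + (g_new - table[j]) / n       # keep the running average current
        table[j] = g_new
    return x

# Toy smooth, strongly convex finite sum: f_i(x) = 0.5 * (a_i @ x - b_i)**2
rng = np.random.default_rng(0)
n, d = 20, 3
A, b = rng.normal(size=(n, d)), rng.normal(size=n)
grad_fns = [lambda x, i=i: (A[i] @ x - b[i]) * A[i] for i in range(n)]
L_max = max(np.sum(A * A, axis=1))              # per-component smoothness constants
x_saga = saga(grad_fns, np.zeros(d), step=1.0 / (3 * L_max), iters=4000, rng=rng)
x_star = np.linalg.lstsq(A, b, rcond=None)[0]   # exact least-squares minimizer
```

Because the direction is variance-reduced, the iterates converge linearly to the exact minimizer on strongly convex problems, with no decaying step size, which is the behavior the unified analysis quantifies.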