Nonconvex Stochastic Optimization under Heavy-Tailed Noises: Optimal Convergence without Gradient Clipping

๐Ÿ“… 2024-12-27
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
We study nonconvex stochastic optimization under heavy-tailed noise, where stochastic gradients possess only finite $p$-th moments for $1 < p \leq 2$. Departing from the standard finite-variance assumption and eliminating reliance on gradient clipping, we revisit Batched NSGDM: a clipping-free algorithm. The method integrates momentum normalization, batched sampling, and adaptive step sizes, underpinned by heavy-tailed probability inequalities and a stability analysis of the iterate sequence. Theoretically, when $p$ is known, Batched NSGDM achieves the optimal convergence rate $\mathcal{O}\big(T^{(1-p)/(3p-2)}\big)$; when $p$ is unknown, it attains $\mathcal{O}\big(T^{(1-p)/(2p)}\big)$, the first convergence rate established for this clipping-free setting. These results demonstrate that gradient clipping is not necessary for convergence under heavy-tailed noise, thereby broadening both the theoretical foundations and practical applicability of nonconvex stochastic optimization.
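The update rule sketched above (momentum accumulation over mini-batch gradients followed by a normalized, rather than clipped, step) can be illustrated in NumPy. This is a minimal sketch assuming a generic stochastic-gradient oracle; the function name, hyperparameter defaults, and the heavy-tailed test problem are illustrative, not taken from the paper.

```python
import numpy as np

def batched_nsgdm(grad_fn, x0, T=1000, batch_size=32, lr=0.05, beta=0.9, rng=None):
    """Sketch of Batched Normalized SGD with Momentum (Batched NSGDM).

    grad_fn(x, batch_size, rng) returns an averaged stochastic gradient
    over a mini-batch. Batching tempers the heavy-tailed noise, and the
    step is normalized by the momentum norm -- no clipping anywhere.
    """
    rng = rng or np.random.default_rng(0)
    x = np.asarray(x0, dtype=float)
    m = np.zeros_like(x)
    for _ in range(T):
        g = grad_fn(x, batch_size, rng)   # mini-batch stochastic gradient
        m = beta * m + (1.0 - beta) * g   # momentum accumulation
        norm = np.linalg.norm(m)
        if norm > 0.0:
            x = x - lr * m / norm         # unit-length (normalized) step
    return x
```

As a quick sanity check, one can run the sketch on a quadratic objective with Student-t gradient noise (df = 1.5, so the variance is infinite but the $p$-th moment is finite for $p < 1.5$) and observe that the iterates still approach the minimizer without any clipping.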

๐Ÿ“ Abstract
Recently, the study of heavy-tailed noises in first-order nonconvex stochastic optimization has received significant attention, since it is recognized as a more realistic condition suggested by many empirical observations. Specifically, the stochastic noise (the difference between the stochastic and true gradient) is considered only to have a finite $\mathfrak{p}$-th moment where $\mathfrak{p}\in\left(1,2\right]$, instead of assuming it always satisfies the classical finite-variance assumption. To deal with this more challenging setting, different algorithms have been proposed and proved to converge at an optimal $\mathcal{O}(T^{\frac{1-\mathfrak{p}}{3\mathfrak{p}-2}})$ rate for smooth objectives after $T$ iterations. Notably, all these newly designed algorithms are based on the same technique: gradient clipping. Naturally, one may ask whether clipping is a necessary ingredient and the only way to guarantee convergence under heavy-tailed noises. In this work, by revisiting the existing Batched Normalized Stochastic Gradient Descent with Momentum (Batched NSGDM) algorithm, we provide the first convergence result under heavy-tailed noises without gradient clipping. Concretely, we prove that Batched NSGDM achieves the optimal $\mathcal{O}(T^{\frac{1-\mathfrak{p}}{3\mathfrak{p}-2}})$ rate even under the relaxed smoothness condition. More interestingly, we also establish the first $\mathcal{O}(T^{\frac{1-\mathfrak{p}}{2\mathfrak{p}}})$ convergence rate for the case where the tail index $\mathfrak{p}$ is unknown in advance, which is arguably the common scenario in practice.
Problem

Research questions and friction points this paper is trying to address.

Non-convex Optimization
Heavy-tailed Noise
Clipping-free Methods
Innovation

Methods, ideas, or system contributions that make the work stand out.

Batched NSGDM
Heavy-tailed Noise
Optimal Convergence Rate
๐Ÿ”Ž Similar Papers
No similar papers found.