🤖 AI Summary
For nonsmooth weakly convex optimization, existing methods struggle to escape strict saddle points, which hinders convergence to local minima.
Method: This paper proposes a family of perturbed proximal algorithms, including perturbed proximal point, proximal gradient, and proximal linear variants, to address this challenge.
Contributions/Results: The paper gives the first verifiable characterization of an ε-approximate local minimum for nonsmooth weakly convex functions and the first theoretical guarantee for escaping strict saddle points in this setting. By combining perturbed optimization, nonsmooth analysis, and saddle-point escape theory, it shows that under standard assumptions all three algorithms compute an ε-approximate local minimum in O(ε⁻² log d) iterations, which yields the first polynomial-time convergence guarantee for escaping saddle points in nonsmooth optimization.
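For background, first-order near-stationarity of a ρ-weakly convex function is standardly certified through the gradient of its Moreau envelope, which is smooth even when the function itself is not; the sketch below records that common notion. This is textbook context (in the style of Davis and Drusvyatskiy), not necessarily the paper's exact characterization, which must additionally exclude strict saddles.

```latex
% Moreau envelope of a rho-weakly convex f, with 0 < lambda < 1/rho.
% Its gradient norm gives a verifiable stationarity measure; the
% paper's epsilon-approximate local minimum presumably strengthens
% such a first-order test with a condition ruling out strict saddles.
\[
  \varphi_{\lambda}(x) \;=\; \min_{y}\Big\{\, f(y) + \tfrac{1}{2\lambda}\,\|y - x\|^{2} \Big\},
  \qquad
  x \text{ is } \varepsilon\text{-stationary} \iff \|\nabla \varphi_{\lambda}(x)\| \le \varepsilon .
\]
```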
📝 Abstract
We propose perturbed proximal algorithms that can provably escape strict saddle points for nonsmooth weakly convex functions. The main results are based on a novel characterization of an ε-approximate local minimum for nonsmooth functions and on recent developments in perturbed gradient methods for escaping saddle points in smooth problems. Specifically, we show that under standard assumptions, the perturbed proximal point, perturbed proximal gradient, and perturbed proximal linear algorithms find an ε-approximate local minimum for nonsmooth weakly convex functions in O(ε⁻² log(d)) iterations, where d is the dimension of the problem.

Keywords: Nonsmooth Optimization, Saddle Point, Perturbed Proximal Algorithms
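To make the iteration concrete, below is a minimal NumPy sketch of a perturbed proximal gradient loop in the spirit of the abstract. Everything here is an illustrative assumption: the function and argument names are hypothetical, and the step size, perturbation radius, and trigger rule are placeholders rather than the paper's analyzed parameter choices.

```python
import numpy as np

def perturbed_prox_grad(grad_g, prox_h, x0, eta=0.1, eps=1e-3,
                        radius=1e-3, cooldown=10, max_iter=10_000, rng=None):
    """Sketch (not the paper's algorithm): minimize g(x) + h(x), where g is
    smooth with gradient `grad_g` and h is nonsmooth but prox-friendly:
    `prox_h(v, eta)` returns argmin_u { h(u) + ||u - v||^2 / (2*eta) }.

    When the proximal gradient mapping is small -- the iterate is near a
    stationary point, possibly a strict saddle -- a small random
    perturbation is injected, mirroring perturbed gradient descent from
    the smooth setting.
    """
    rng = rng or np.random.default_rng(0)
    x = np.asarray(x0, dtype=float)
    since_perturb = cooldown  # iterations since the last perturbation
    for _ in range(max_iter):
        x_new = prox_h(x - eta * grad_g(x), eta)   # proximal gradient step
        if np.linalg.norm(x_new - x) / eta <= eps and since_perturb >= cooldown:
            # Near-stationary: perturb uniformly within a small ball.
            noise = rng.standard_normal(x.shape)
            noise *= radius * rng.random() ** (1 / x.size) / np.linalg.norm(noise)
            x_new = x_new + noise
            since_perturb = 0
        else:
            since_perturb += 1
        x = x_new
    return x

# Convex sanity check (no saddles): 0.5*||A x - b||^2 + ||x||_1,
# using the soft-thresholding prox of the l1 norm.
A = np.array([[2.0, 0.0], [0.0, 1.0]])
b = np.array([1.0, -1.0])
grad_g = lambda x: A.T @ (A @ x - b)
prox_h = lambda v, eta: np.sign(v) * np.maximum(np.abs(v) - eta, 0.0)
print(perturbed_prox_grad(grad_g, prox_h, x0=np.zeros(2)))
```

The perturbed proximal point and proximal linear variants would follow the same template, swapping the update for a full proximal step or a prox step on a local linearization of the smooth inner map, respectively.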