🤖 AI Summary
This work addresses the lack of global convergence theory for Iteratively Reweighted Least Squares (IRLS) in robust linear and affine subspace recovery. The proposed IRLS variant incorporates a dynamic smoothing regularization and guarantees global linear convergence from arbitrary initializations, despite the nonconvex, Riemannian-manifold-constrained geometry of the problem. The analysis rigorously establishes global convergence in both the linear and the affine setting, filling a fundamental gap in the convergence theory of IRLS for subspace optimization. Empirical validation on low-dimensional neural network training demonstrates superior convergence speed and robustness compared to standard approaches. The key innovation is the intrinsic coupling of the smoothing regularization with the IRLS iteration, which overcomes the classical restriction of IRLS to local convergence. To the best of our knowledge, this is the first globally convergent IRLS variant for nonconvex optimization on a Riemannian manifold, and it yields the first rigorous global convergence guarantee for IRLS in subspace learning.
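To make the coupling concrete, here is a minimal sketch of the general shape such an IRLS iteration can take for linear subspace recovery: alternate a smoothed reweighting of the data points with a weighted PCA step, shrinking the smoothing parameter as the iteration proceeds. The function name `irls_subspace`, the geometric decay schedule `eps *= beta`, and all parameter defaults are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def irls_subspace(X, d, n_iter=100, eps0=1.0, beta=0.9):
    """Hedged sketch of IRLS with dynamic smoothing for subspace recovery.

    X : (N, D) data matrix, one point per row.
    d : dimension of the sought subspace.
    The geometric smoothing schedule (eps0, beta) is an assumption made
    for illustration; the paper's schedule may differ.
    Returns a (D, d) orthonormal basis of the estimated subspace.
    """
    eps = eps0
    # Arbitrary initialization: top-d right singular vectors of the raw data.
    U = np.linalg.svd(X, full_matrices=False)[2][:d].T
    for _ in range(n_iter):
        # Distance of each point to the current subspace span(U).
        residuals = X - (X @ U) @ U.T
        dist = np.linalg.norm(residuals, axis=1)
        # Smoothed robust weights: w_i = 1 / max(dist_i, eps).
        w = 1.0 / np.maximum(dist, eps)
        # Weighted least-squares step: top-d eigenvectors of sum_i w_i x_i x_i^T.
        C = (X * w[:, None]).T @ X
        _, eigvecs = np.linalg.eigh(C)  # eigenvalues in ascending order
        U = eigvecs[:, -d:]
        # Dynamic smoothing: tighten the regularization each iteration,
        # coupling the smoothing to the IRLS iteration itself.
        eps *= beta
    return U
```

The smoothing floor `eps` keeps the weights bounded (points very close to the current subspace would otherwise receive infinite weight), while outliers far from the subspace are downweighted by `1/dist`; letting `eps` decay is what distinguishes this from classical fixed-smoothing IRLS.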
📝 Abstract
Robust subspace estimation is fundamental to many machine learning and data analysis tasks. Iteratively Reweighted Least Squares (IRLS) is an elegant and empirically effective approach to this problem, yet its theoretical properties remain poorly understood. This paper establishes that, under deterministic conditions, a variant of IRLS with dynamic smoothing regularization converges linearly to the underlying subspace from any initialization. We extend these guarantees to affine subspace estimation, a setting that lacks prior recovery theory. Additionally, we illustrate the practical benefits of IRLS through an application to low-dimensional neural network training. Our results provide the first global convergence guarantees for IRLS in robust subspace recovery and, more broadly, for nonconvex IRLS on a Riemannian manifold.
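For the affine setting mentioned in the abstract, a natural modification (again a hedged sketch under the same assumptions, not the paper's exact procedure) is to re-estimate a weighted center at each iteration and apply the same weighted PCA step to the re-centered points:

```python
import numpy as np

def irls_affine_subspace(X, d, n_iter=100, eps0=1.0, beta=0.9):
    """Hedged sketch: IRLS with dynamic smoothing for affine subspace recovery.

    Same reweighting as the linear case, but each iteration also
    re-estimates a weighted center c, so the model is c + span(U).
    """
    eps, c = eps0, X.mean(axis=0)
    U = np.linalg.svd(X - c, full_matrices=False)[2][:d].T
    for _ in range(n_iter):
        Y = X - c
        dist = np.linalg.norm(Y - (Y @ U) @ U.T, axis=1)
        w = 1.0 / np.maximum(dist, eps)
        # Weighted center: heavily downweighted outliers barely move it.
        c = (w[:, None] * X).sum(axis=0) / w.sum()
        Y = X - c
        # Weighted PCA on the re-centered points.
        _, eigvecs = np.linalg.eigh((Y * w[:, None]).T @ Y)
        U = eigvecs[:, -d:]
        eps *= beta
    return c, U
```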