Global Convergence of Iteratively Reweighted Least Squares for Robust Subspace Recovery

📅 2025-06-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the lack of global convergence theory for Iteratively Reweighted Least Squares (IRLS) in robust subspace and affine subspace recovery. We propose an IRLS algorithm that incorporates dynamic smoothing regularization and, under deterministic conditions, guarantees global linear convergence from arbitrary initializations despite the nonconvex Riemannian manifold constraint. The analysis covers both linear and affine subspace recovery, filling a fundamental gap in the convergence theory of IRLS for subspace optimization. Experiments on low-dimensional neural network training demonstrate fast convergence and robustness compared to standard approaches. The key innovation is the intrinsic coupling of the smoothing regularization with the IRLS iteration, which overcomes the classical limitation to local convergence; to the best of our knowledge, this yields the first globally convergent IRLS variant for nonconvex optimization on a Riemannian manifold.

📝 Abstract
Robust subspace estimation is fundamental to many machine learning and data analysis tasks. Iteratively Reweighted Least Squares (IRLS) is an elegant and empirically effective approach to this problem, yet its theoretical properties remain poorly understood. This paper establishes that, under deterministic conditions, a variant of IRLS with dynamic smoothing regularization converges linearly to the underlying subspace from any initialization. We extend these guarantees to affine subspace estimation, a setting that lacks prior recovery theory. Additionally, we illustrate the practical benefits of IRLS through an application to low-dimensional neural network training. Our results provide the first global convergence guarantees for IRLS in robust subspace recovery and, more broadly, for nonconvex IRLS on a Riemannian manifold.
Problem

Research questions and friction points this paper is trying to address.

Global convergence of IRLS for robust subspace recovery
Theoretical guarantees for affine subspace estimation
Application of IRLS to low-dimensional neural network training
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic smoothing regularization in IRLS
Linear convergence to true subspace
Global guarantees on Riemannian manifold
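The IRLS-with-dynamic-smoothing idea summarized above can be sketched in code. The following is an illustrative NumPy implementation, not the paper's exact algorithm: the smoothed l1 weighting, weighted-PCA update, and geometric decay schedule for the smoothing parameter `eps` are generic assumed choices, and all function and parameter names (`irls_subspace`, `eps0`, `decay`) are hypothetical.

```python
import numpy as np

def irls_subspace(X, d, n_iter=100, eps0=1.0, decay=0.9):
    """Sketch of IRLS with dynamic smoothing for robust subspace recovery.

    X: (n, D) data matrix; d: target subspace dimension.
    Returns a (D, d) orthonormal basis for the estimated subspace.
    """
    # Initialize with ordinary PCA (the theory allows arbitrary initialization).
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    V = Vt[:d].T                          # (D, d) orthonormal basis
    eps = eps0
    for _ in range(n_iter):
        resid = X - (X @ V) @ V.T         # residuals off the current subspace
        dist = np.linalg.norm(resid, axis=1)
        # Smoothed l1 weights: large for inliers, small for outliers;
        # eps regularizes the weights near zero residual.
        w = 1.0 / np.sqrt(dist**2 + eps**2)
        # Weighted PCA step: top-d eigenvectors of the weighted covariance.
        C = (X * w[:, None]).T @ X
        _, eigvecs = np.linalg.eigh(C)
        V = eigvecs[:, -d:]
        eps *= decay                      # dynamic smoothing schedule
    return V
```

With inliers concentrated near a low-dimensional subspace and a moderate fraction of outliers, the reweighting progressively suppresses the outliers while the shrinking `eps` sharpens the weights toward the unsmoothed robust objective.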
Gilad Lerman (University of Minnesota)
Kang Li (University of Central Florida)
Tyler Maunu (Brandeis University)
Teng Zhang (University of Central Florida)