🤖 AI Summary
This paper addresses robust matrix completion (RMC), where the observed entries of an underlying low-rank matrix are corrupted by sparse noise. To overcome limitations of existing nonconvex approaches, which rely on explicit regularization or sample splitting, we propose a nonconvex alternating optimization algorithm that requires neither. Our method alternates between projected gradient updates for the low-rank estimate and generalized thresholding (e.g., soft-thresholding, SCAD) for sparse noise removal. We introduce the first leave-one-out analysis for a nonconvex RMC algorithm, establishing linear convergence guarantees and significantly improving the sampling complexity bound of singular value projection-based methods. The theoretical framework accommodates a broad class of thresholding functions, and extensive experiments validate superior convergence speed and recovery accuracy. The approach thus achieves both theoretical rigor, via provable linear convergence and refined sampling bounds, and practical efficiency, through simple, scalable iterations with no regularization parameters to tune.
📝 Abstract
We study the problem of robust matrix completion (RMC), where the partially observed entries of an underlying low-rank matrix are corrupted by sparse noise. Existing analyses of nonconvex methods for this problem either require explicit but empirically redundant regularization in the algorithm or require sample splitting in the analysis. In this paper, we consider a simple yet efficient nonconvex method that alternates between a projected gradient step for the low-rank part and a thresholding step for the sparse noise part. Inspired by the leave-one-out analysis for low-rank matrix completion, we establish that the method achieves linear convergence for a general class of thresholding functions, including, for example, soft-thresholding and SCAD. To the best of our knowledge, this is the first leave-one-out analysis of a nonconvex method for RMC. Additionally, when applied to low-rank matrix completion, our result improves the sampling complexity of the existing result for the singular value projection method.