🤖 AI Summary
Robust matrix completion (RMC) for large-scale, low-rank data suffering from simultaneous missing entries and extreme outliers remains challenging. Method: This paper proposes Learned Robust Matrix Completion (LRMC), a scalable, learnable non-convex deep-unfolding framework featuring a feedforward-recurrent-mixed neural architecture that emulates infinite-step iterative optimization, integrated with non-convex regularization and an algorithm design that provably ensures linear convergence, thereby balancing expressive modeling capacity and theoretical guarantees. Contribution/Results: The framework jointly handles missing data and severe outlier corruption, achieving state-of-the-art performance on diverse real-world tasks, including video background subtraction, ultrasound imaging, face modeling, and cloud removal from satellite imagery, demonstrating superior robustness and generalization under complex, realistic conditions.
📝 Abstract
Robust matrix completion (RMC) is a widely used machine learning tool that simultaneously tackles two critical issues in low-rank data analysis: missing data entries and extreme outliers. This paper proposes a novel scalable and learnable non-convex approach, coined Learned Robust Matrix Completion (LRMC), for large-scale RMC problems. LRMC enjoys low computational complexity with linear convergence. Motivated by the proposed theorem, the free parameters of LRMC can be effectively learned via deep unfolding to achieve optimum performance. Furthermore, this paper proposes a flexible feedforward-recurrent-mixed neural network framework that extends deep unfolding from a fixed number of iterations to infinitely many iterations. The superior empirical performance of LRMC is verified with extensive experiments against state-of-the-art methods on synthetic datasets and real applications, including video background subtraction, ultrasound imaging, face modeling, and cloud removal from satellite imagery.
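To make the feedforward-recurrent-mixed idea concrete, here is a minimal sketch of the structure it describes: a few unrolled "feedforward" iterations, each with its own (in practice, learned) parameters, followed by a weight-tied "recurrent" iteration repeated until convergence, emulating infinitely many steps. This is an illustration only: the proximal-gradient update with soft-thresholding below is a generic stand-in, not the paper's actual non-convex LRMC algorithm, and all parameter values (`K`, `step_sizes`, `thresholds`) are hypothetical.

```python
import numpy as np

def soft_threshold(x, tau):
    # Elementwise soft-thresholding (proximal operator of the l1 norm);
    # a convex stand-in for the paper's non-convex regularization.
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def hybrid_unfolded_completion(Y, mask, K=5, max_rec=100, tol=1e-6):
    """Feedforward-recurrent-mixed sketch of deep unfolding.

    Y    : observed matrix (arbitrary values at unobserved entries)
    mask : 1.0 at observed entries, 0.0 at missing entries
    """
    # Hypothetical per-layer parameters; in a deep-unfolding framework
    # these would be learned end-to-end from data.
    step_sizes = [0.5] * K
    thresholds = [0.1] * K

    X = np.zeros_like(Y)

    # Feedforward stage: K unrolled iterations, distinct parameters per layer.
    for k in range(K):
        grad = mask * (X - Y)  # gradient of the masked data-fit term
        X = soft_threshold(X - step_sizes[k] * grad, thresholds[k])

    # Recurrent stage: one weight-tied iteration repeated to numerical
    # convergence, extending the unrolled network to "infinite" depth.
    step, tau = 0.5, 0.1
    for _ in range(max_rec):
        grad = mask * (X - Y)
        X_new = soft_threshold(X - step * grad, tau)
        if np.linalg.norm(X_new - X) < tol:
            X = X_new
            break
        X = X_new
    return X
```

The point of the split is that the feedforward stage gives the network per-layer expressive power, while the recurrent stage behaves like a classical iterative solver whose fixed point does not depend on a hand-picked iteration count.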