🤖 AI Summary
This paper addresses the robust recovery of low-dimensional vectors, such as sparse signals or natural images, from underdetermined linear measurements corrupted by structured noise. We propose a unified generalized projected gradient descent framework that jointly leverages sparse and deep priors. To improve training stability of the deep prior, we introduce a normalized idempotent regularization; to suppress structured noise, we design a generalized back-projection strategy. Theoretically, we establish robust convergence even under model mismatch and characterize the fundamental trade-off between identifiability and stability. Experiments demonstrate that our method significantly improves noise resilience and reconstruction accuracy in both sparse signal recovery and image reconstruction tasks, while maintaining algorithmic stability and generalization.
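As a concrete reading of this framework, here is a minimal NumPy sketch of one GPGD iteration: a gradient step on the data-fit term followed by a projection onto the model set. Soft-thresholding stands in for the sparse prior; a learned deep prior would replace `project` with a trained network. The function names, step size, and threshold are illustrative assumptions, not the paper's definitions.

```python
import numpy as np

def soft_threshold(z, tau):
    """Soft-thresholding: an approximate projection onto sparse vectors."""
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def gpgd(A, y, project, step, n_iter=300):
    """Generalized projected gradient descent: alternate a gradient step on
    the data-fit term ||Ax - y||^2 with a (possibly learned) projection onto
    the low-dimensional model set."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x - step * A.T @ (A @ x - y)  # gradient step on the least-squares term
        x = project(x)                    # projection onto the model set
    return x

# Sparse-prior instance; a deep projective prior would replace `project`
# with a trained network acting as an approximate projection.
rng = np.random.default_rng(0)
m, n, s = 50, 200, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, size=s, replace=False)] = 1.0
y = A @ x_true
step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1 / ||A||^2 keeps the gradient step stable
x_hat = gpgd(A, y, lambda z: soft_threshold(z, 0.02), step)
```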
📝 Abstract
We consider the problem of recovering an unknown low-dimensional vector from noisy, underdetermined observations. We focus on the Generalized Projected Gradient Descent (GPGD) framework, which unifies traditional sparse recovery methods and modern approaches using learned deep projective priors. We extend previous convergence results to establish robustness to model and projection errors. We use these theoretical results to explore ways to better control the stability and robustness constants. To reduce recovery errors due to measurement noise, we consider generalized back-projection strategies to adapt GPGD to structured noise, such as sparse outliers. To improve the stability of GPGD, we propose a normalized idempotent regularization for learning deep projective priors. We provide numerical experiments in the context of sparse recovery and image inverse problems, highlighting the trade-offs between identifiability and stability that can be achieved with such methods.
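The abstract names two ingredients without giving formulas. Below is a minimal PyTorch sketch of how they might look: a normalized idempotence penalty for training the projector (one plausible reading of "normalized idempotent regularization"), and residual clipping as one concrete generalized back-projection against sparse outliers. All function names, the clipping choice, and the loss weighting are our assumptions, not the paper's definitions.

```python
import torch

def idempotence_penalty(P, x, eps=1e-8):
    """Normalized idempotent regularizer (our reading of the paper's term):
    a true projection satisfies P(P(x)) = P(x), so we penalize the relative
    deviation from idempotency on training samples x."""
    px = P(x)
    ppx = P(px)
    return torch.linalg.norm(ppx - px) / (torch.linalg.norm(px) + eps)

def robust_residual(r, tau):
    """One possible generalized back-projection for sparse outliers: clipping
    the residual equals r minus a soft-thresholded outlier estimate, so gross
    measurement errors no longer dominate the gradient step A^T r."""
    return torch.clamp(r, -tau, tau)

# Training the deep projector (hypothetical names; the weighting lam is ours):
# loss = ((P(x_noisy) - x_clean) ** 2).mean() + lam * idempotence_penalty(P, x_noisy)
#
# Robust GPGD step using the generalized back-projection:
# x = P(x - step * A.T @ robust_residual(A @ x - y, tau))

# Quick check with a toy projector (a real deep prior would be a trained network):
P = torch.nn.Sequential(torch.nn.Linear(64, 32), torch.nn.ReLU(), torch.nn.Linear(32, 64))
x = torch.randn(8, 64)
print(idempotence_penalty(P, x))
```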