From sparse recovery to plug-and-play priors, understanding trade-offs for stable recovery with generalized projected gradient descent

📅 2025-12-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses the robust recovery of low-dimensional vectors, such as sparse signals or natural images, from underdetermined linear measurements corrupted by structured noise. The authors propose a unified Generalized Projected Gradient Descent (GPGD) framework that accommodates both sparse and learned deep priors. To stabilize training of the deep prior, they introduce a normalized idempotent regularization; to suppress structured noise, they design a generalized back-projection strategy. Theoretically, they establish robust convergence even under model mismatch and characterize the fundamental trade-off between identifiability and stability. Experiments demonstrate that the method significantly improves noise resilience and reconstruction accuracy in both sparse signal recovery and image reconstruction tasks, while maintaining algorithmic stability and generalization capability.

📝 Abstract
We consider the problem of recovering an unknown low-dimensional vector from noisy, underdetermined observations. We focus on the Generalized Projected Gradient Descent (GPGD) framework, which unifies traditional sparse recovery methods and modern approaches using learned deep projective priors. We extend previous convergence results to establish robustness to model and projection errors. We use these theoretical results to explore ways to better control stability and robustness constants. To reduce recovery errors due to measurement noise, we consider generalized back-projection strategies to adapt GPGD to structured noise, such as sparse outliers. To improve the stability of GPGD, we propose a normalized idempotent regularization for the learning of deep projective priors. We provide numerical experiments in the context of sparse recovery and image inverse problems, highlighting the trade-offs between identifiability and stability that can be achieved with such methods.
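The GPGD iteration described in the abstract alternates a gradient step on the data-fidelity term with a projection onto the model set. A minimal sketch of the sparse-prior instance, assuming a hard-thresholding projector, a noiseless Gaussian measurement model, and illustrative dimensions and step size (none of these specifics come from the paper):

```python
import numpy as np

def hard_threshold(x, s):
    """Sparse projection: keep the s largest-magnitude entries, zero the rest."""
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-s:]
    out[idx] = x[idx]
    return out

def gpgd(y, A, project, step, n_iter=500):
    """Generalized projected gradient descent: a gradient step on the
    least-squares data term followed by a projection onto the model set."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = project(x - step * A.T @ (A @ x - y))
    return x

# Illustrative sparse-recovery setup with a random Gaussian measurement matrix.
rng = np.random.default_rng(0)
n, m, s = 100, 40, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)   # roughly unit-norm columns
x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
y = A @ x_true                                  # noiseless measurements
x_hat = gpgd(y, A, lambda z: hard_threshold(z, s), step=0.5)
```

With a learned deep prior, `project` would instead be a trained network mapping an arbitrary vector to the image model set; the iteration itself is unchanged.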
Problem

Research questions and friction points this paper is trying to address.

Recovering low-dimensional vectors from noisy, underdetermined observations using GPGD.
Enhancing stability and robustness against model errors and structured noise.
Exploring trade-offs between identifiability and stability in sparse recovery and image inverse problems.
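To make the structured-noise point above concrete: a back-projection strategy adapted to sparse outliers replaces the plain residual back-projection `A.T @ (A x - y)` with a robust variant. The sketch below uses the Huber influence function as one standard robust choice; the paper's generalized back-projection may take a different form, and all names here are illustrative:

```python
import numpy as np

def huber_grad(r, delta):
    """Influence function of the Huber loss: linear on small residuals,
    saturating on large ones, so sparse outliers give bounded gradients."""
    return np.clip(r, -delta, delta)

def robust_gpgd_step(x, y, A, project, step, delta=1.0):
    """One GPGD iteration where the residual is passed through a robust
    influence function before being back-projected by A.T."""
    r = A @ x - y
    return project(x - step * A.T @ huber_grad(r, delta))
```

Entries of the residual corrupted by large outliers are clipped to magnitude `delta`, so a few gross measurement errors cannot dominate the update direction.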
Innovation

Methods, ideas, or system contributions that make the work stand out.

Generalized Projected Gradient Descent unifies sparse recovery and learned priors.
Normalized idempotent regularization enhances stability of deep projective priors.
Generalized back-projection strategies adapt to structured noise like outliers.
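The idea behind idempotent regularization is that a true projector P satisfies P∘P = P, so training can penalize the mismatch between P(P(x)) and P(x). The paper defines the exact normalized regularizer; the version below, with the penalty normalized by the energy of P(x), is only one plausible illustration:

```python
import numpy as np

def idempotence_penalty(P, x, eps=1e-8):
    """Illustrative penalty driving P∘P toward P, normalized by the
    energy of P(x); the paper's exact regularizer may differ."""
    Px = P(x)
    return np.sum((P(Px) - Px) ** 2) / (np.sum(Px ** 2) + eps)

# An exact projection (onto a coordinate subspace) is idempotent:
proj = lambda z: np.where(np.arange(z.size) < 2, z, 0.0)
x = np.array([1.0, 2.0, 3.0, 4.0])
print(idempotence_penalty(proj, x))  # → 0.0 for a true projector
```

In training a deep projective prior, a term of this kind would be added to the reconstruction loss so that the learned network behaves like a projection onto the image model set.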