🤖 AI Summary
Efficient and accurate recovery of high-dimensional sparse signals from quadratic measurements (e.g., phase retrieval) remains challenging due to inherent nonconvexity and ill-posedness.
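For concreteness, a standard instance of such quadratic measurements is the (sparse) phase-retrieval model; in the notation below, which is ours rather than the paper's,

$$y_i = \left|\langle a_i, x^{\natural} \rangle\right|^2, \quad i = 1, \dots, m, \qquad \|x^{\natural}\|_0 \le s,$$

and the goal is to recover the $s$-sparse signal $x^{\natural} \in \mathbb{R}^n$ (or $\mathbb{C}^n$) from the pairs $(a_i, y_i)_{i=1}^{m}$ using as few measurements $m$ as possible.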
Method: This paper introduces algebraic geometry tools, novel in the analysis of sparse quadratic systems, to establish rigorous recovery guarantees. We propose a two-stage sparse Gauss–Newton algorithm: Stage I employs a support-restricted spectral initialization requiring only $O(s^2 \log n)$ measurements; Stage II applies an iterative hard-thresholding Gauss–Newton refinement, achieving near-optimal sampling complexity without resampling.
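Schematically, Stage II alternates a Gauss–Newton step on the residuals of the quadratic system with hard thresholding onto the $s$ largest-magnitude entries; the display below is our shorthand for this structure, not the paper's exact update rule:

$$x^{t+1} = \mathcal{H}_s\!\left(x^{t} + \Delta^{t}\right), \qquad \Delta^{t} \in \arg\min_{\Delta} \left\| J(x^{t})\,\Delta + r(x^{t}) \right\|_2^2,$$

where $r(x) = \big(|\langle a_i, x\rangle|^2 - y_i\big)_{i=1}^{m}$ collects the residuals of the quadratic system, $J(x)$ is its Jacobian, and $\mathcal{H}_s$ keeps the $s$ largest-magnitude entries.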
Contribution/Results: Under Gaussian measurements, the algorithm attains quadratic convergence and high-precision recovery. Experiments demonstrate superior performance at high sparsity levels: a higher success rate with fewer measurements, convergence in roughly $1/10$ of the iterations of state-of-the-art methods, and a significantly lower relative error, balancing computational efficiency with reconstruction robustness.
📝 Abstract
In signal processing and data recovery, reconstructing a signal from quadratic measurements poses a significant challenge, particularly in high-dimensional settings where the number of measurements $m$ is far smaller than the signal dimension $n$ (i.e., $m \ll n$). This paper addresses this problem by exploiting signal sparsity. Using tools from algebraic geometry, we derive theoretical recovery guarantees for sparse quadratic systems, showing that $m \ge 2s$ (real case) and $m \ge 4s-2$ (complex case) generic measurements suffice to uniquely recover all $s$-sparse signals. Under a Gaussian measurement model, we propose a novel two-stage Sparse Gauss–Newton (SGN) algorithm. The first stage employs a support-restricted spectral initialization, yielding an accurate initial estimate with $m = O(s^2 \log n)$ measurements. The second stage refines this estimate via an iterative hard-thresholding Gauss–Newton method, achieving quadratic convergence to the true signal within finitely many iterations once $m \ge O(s \log n)$. Compared to existing second-order methods, our algorithm achieves near-optimal sampling complexity for the refinement stage without requiring resampling. Numerical experiments indicate that SGN significantly outperforms state-of-the-art algorithms in both accuracy and computational efficiency. In particular, (1) when the sparsity level $s$ is high, SGN achieves the same success rate as existing algorithms with fewer measurements, and (2) SGN converges in only about $1/10$ of the iterations required by the best existing algorithm while reaching a lower relative error.
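To make the two-stage structure concrete, the following is a minimal NumPy sketch of a solver in this spirit for the real Gaussian case: a support-restricted spectral initialization followed by hard-thresholded Gauss–Newton refinement. The function names, support-scoring rule, candidate-set construction, and all constants are illustrative assumptions on our part, not the SGN algorithm as specified in the paper.

```python
# A minimal sketch of a two-stage solver in the spirit of sparse Gauss-Newton for
# real-valued Gaussian measurements y_i = (a_i^T x)^2 with an s-sparse x.
# Everything below is an illustrative assumption, not the paper's exact algorithm.
import numpy as np

def hard_threshold(z, s):
    """Keep the s largest-magnitude entries of z and zero out the rest."""
    out = np.zeros_like(z)
    keep = np.argsort(np.abs(z))[-s:]
    out[keep] = z[keep]
    return out

def spectral_init(A, y, s):
    """Stage I (schematic): support-restricted spectral initialization."""
    m, n = A.shape
    # Coordinates carrying signal energy tend to have larger weighted scores.
    scores = (y[:, None] * A**2).mean(axis=0)
    support = np.argsort(scores)[-s:]
    # Leading eigenvector of the weighted covariance restricted to the support.
    As = A[:, support]
    Y = (As * y[:, None]).T @ As / m
    _, V = np.linalg.eigh(Y)
    x0 = np.zeros(n)
    x0[support] = V[:, -1] * np.sqrt(y.mean())   # E[y] ~ ||x||^2 fixes the scale
    return x0

def sparse_gauss_newton(A, y, s, x0, iters=25):
    """Stage II (schematic): hard-thresholded Gauss-Newton refinement."""
    x = x0.copy()
    for _ in range(iters):
        Ax = A @ x
        r = Ax**2 - y                                # residuals of the quadratic system
        grad = A.T @ (4.0 * r * Ax)                  # gradient of the squared-residual loss
        # Candidate set: current support plus the s largest-gradient coordinates,
        # so support errors made at initialization can still be corrected.
        cand = np.union1d(np.flatnonzero(x), np.argsort(np.abs(grad))[-s:])
        J = 2.0 * Ax[:, None] * A[:, cand]           # Jacobian of r on the candidate set
        step, *_ = np.linalg.lstsq(J, -r, rcond=None)
        x_new = np.zeros_like(x)
        x_new[cand] = x[cand] + step
        x = hard_threshold(x_new, s)                 # re-impose s-sparsity after the step
    return x

# Toy run: s-sparse signal, Gaussian measurements, m well below n.
rng = np.random.default_rng(0)
n, s, m = 3000, 10, 1200
x_true = np.zeros(n)
idx = rng.choice(n, s, replace=False)
x_true[idx] = rng.choice([-1.0, 1.0], s) * rng.uniform(1.0, 2.0, s)
A = rng.standard_normal((m, n))
y = (A @ x_true) ** 2
x_hat = sparse_gauss_newton(A, y, s, spectral_init(A, y, s))
err = min(np.linalg.norm(x_hat - x_true), np.linalg.norm(x_hat + x_true))
print("relative error (up to global sign):", err / np.linalg.norm(x_true))
```

The reconstruction error is measured up to a global sign because $x$ and $-x$ produce identical quadratic measurements.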