🤖 AI Summary
Generative priors for imaging inverse problems suffer from architectural constraints, mode collapse, and training-set bias, which limit their ability to represent out-of-distribution images or images with rare features. This paper proposes invertible neural networks (INNs), which have zero representation error by design, as natural signal priors, solving the empirical risk formulation of the inverse problem with a regularizer that promotes high-likelihood images, imposed either directly as a penalty or implicitly through initialization. The framework yields robust reconstruction across denoising, compressive sensing (CS), and inpainting. For a linear invertible model, the paper establishes theoretical bounds on the expected recovery error. Experiments demonstrate: (i) higher CS accuracy than sparsity priors across almost all undersampling ratios; (ii) markedly better reconstruction of out-of-distribution images and images with rare features than GAN priors; and (iii) competitive performance against unlearned methods such as the deep decoder in undersampled reconstruction.
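As a concrete sketch of this formulation (our notation, not necessarily the paper's exact symbols): for a bijective generator $G$ with a standard Gaussian latent, the negative log-likelihood of a latent code $z$ is proportional to $\|z\|_2^2$, so the likelihood-regularized empirical risk reads

$$
\hat{z} = \arg\min_{z} \; \|A\,G(z) - y\|_2^2 + \gamma \,\|z\|_2^2,
\qquad \hat{x} = G(\hat{z}),
$$

where $A$ is the measurement operator, $y$ the observations, and $\gamma \ge 0$ a regularization weight (a hyperparameter we introduce here for illustration). Setting $\gamma = 0$ and initializing the optimizer at $z = 0$, the latent's highest-likelihood point, corresponds to the initialization-based variant.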
📝 Abstract
Trained generative models have shown remarkable performance as priors for inverse problems in imaging -- for example, Generative Adversarial Network (GAN) priors permit recovery of test images from 5-10x fewer measurements than sparsity priors. Unfortunately, these models may be unable to represent any particular image because of architectural choices, mode collapse, and bias in the training dataset. In this paper, we demonstrate that invertible neural networks, which have zero representation error by design, can be effective natural signal priors for inverse problems such as denoising, compressive sensing, and inpainting. Given a trained generative model, we study the empirical risk formulation of the desired inverse problem under a regularization that promotes high-likelihood images, either directly by penalization or algorithmically by initialization. For compressive sensing, invertible priors can yield higher accuracy than sparsity priors across almost all undersampling ratios. Because they lack representation error, invertible priors can also yield better reconstructions than GAN priors for images that have rare features of variation within the biased training set, including out-of-distribution natural images. We additionally compare compressive-sensing performance to unlearned methods, such as the deep decoder, and we establish theoretical bounds on expected recovery error in the case of a linear invertible model.
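The sketch below illustrates the latent-space recovery loop described above. It is a minimal toy, not the paper's code: a well-conditioned affine map stands in for a trained Glow-style invertible generator, the measurement matrix is Gaussian, and `gamma` is a hypothetical regularization weight (set to 0 here, so likelihood regularization acts only through the zero initialization).

```python
# Minimal sketch of likelihood-regularized recovery with an invertible prior.
# Assumption: a toy invertible affine map x = W z + b stands in for a trained
# bijective generator G with standard Gaussian latent (e.g. a Glow model).
import torch

d = 64                                     # signal dimension (toy)
m = 16                                     # number of measurements (undersampled)

torch.manual_seed(0)
W = torch.randn(d, d) + d * torch.eye(d)   # well-conditioned, hence invertible
b = torch.randn(d)
G = lambda z: z @ W.T + b                  # stand-in invertible generator

A = torch.randn(m, d) / m ** 0.5           # Gaussian measurement matrix
x_true = G(torch.randn(d))                 # a signal the model can represent
y = A @ x_true                             # compressive measurements

# Zero initialization: z = 0 is the latent's highest-likelihood point, so
# starting there plays the role of the likelihood regularizer.
z = torch.zeros(d, requires_grad=True)
opt = torch.optim.Adam([z], lr=1e-2)
gamma = 0.0                                # hypothetical weight; > 0 penalizes ||z||^2

for step in range(2000):
    opt.zero_grad()
    # Empirical risk + (optional) Gaussian negative log-likelihood penalty.
    loss = ((A @ G(z) - y) ** 2).sum() + gamma * (z ** 2).sum()
    loss.backward()
    opt.step()

x_hat = G(z.detach())
print(f"relative error: {(x_hat - x_true).norm() / x_true.norm():.3e}")
```

With a linear generator, gradient descent from the zero initialization converges toward a minimum-norm latent solution, which is the regime the paper's theoretical bound for linear invertible models addresses; with a deep flow model, the same loop applies with `G` swapped for the trained network.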