🤖 AI Summary
GAN priors suffer from significant representation errors in inverse problems (e.g., compressed sensing, super-resolution) due to distribution mismatch between the generated and true data distributions—degrading performance on both in-distribution and out-of-distribution images. To address this, we propose a training-free hybrid modeling framework that linearly couples a fixed pre-trained GAN prior with an under-parameterized, untrained deep decoder, jointly optimizing their combination in an unsupervised setting. This is the first work to synergistically integrate GAN priors and untrained decoders without task-specific GAN fine-tuning, achieving strong generalizability and scalability. Experiments on compressed sensing and super-resolution demonstrate that our method achieves substantially higher PSNR than either component used independently, while maintaining robust performance across both in-distribution and out-of-distribution test images.
📝 Abstract
Generative models, such as GANs, learn an explicit low-dimensional representation of a particular class of images, and so they may be used as natural image priors for solving inverse problems such as image restoration and compressive sensing. GAN priors have demonstrated impressive performance on these tasks, but they can exhibit substantial representation error for both in-distribution and out-of-distribution images, because of the mismatch between the learned, approximate image distribution and the data-generating distribution. In this paper, we demonstrate a method for reducing the representation error of GAN priors by modeling images as the linear combination of a GAN prior with a Deep Decoder. The Deep Decoder is an underparameterized and, most importantly, unlearned natural signal model similar to the Deep Image Prior. No knowledge of the specific inverse problem is needed in the training of the GAN underlying our method. For compressive sensing and image super-resolution, our hybrid model exhibits consistently higher PSNRs than both the GAN prior and the Deep Decoder separately, on both in-distribution and out-of-distribution images. This model provides a method for extensibly and cheaply leveraging the benefits of both learned and unlearned image recovery priors in inverse problems.
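To make the linear-combination idea concrete, here is a minimal NumPy sketch for compressed sensing. It is a toy illustration under simplifying assumptions: the vectors `g` and `d` are fixed stand-ins for the outputs of the pre-trained GAN generator and the Deep Decoder, whereas the actual method also optimizes the GAN latent code and the decoder's weights by gradient descent on the measurement loss. With the two outputs held fixed, the combination weights of the hybrid image `x = alpha*g + beta*d` admit a least-squares solution against the measurements.

```python
import numpy as np

rng = np.random.default_rng(0)

n, m = 64, 32  # signal dimension, number of measurements (m < n: compressed)
A = rng.standard_normal((m, n)) / np.sqrt(m)  # Gaussian sensing matrix

# Hypothetical stand-ins for the two priors' reconstructions:
#   g -- output of the fixed, pre-trained GAN generator
#   d -- output of the untrained, underparameterized Deep Decoder
g = rng.standard_normal(n)
d = rng.standard_normal(n)

# Toy assumption: the true image lies in the span of the two prior outputs
x_true = 0.7 * g + 0.3 * d
y = A @ x_true  # compressed measurements

# Hybrid model x = alpha*g + beta*d: with g and d fixed, minimizing
# ||A x - y||^2 over (alpha, beta) is an ordinary least-squares problem.
B = np.stack([A @ g, A @ d], axis=1)  # m x 2 design matrix
(alpha, beta), *_ = np.linalg.lstsq(B, y, rcond=None)

x_hat = alpha * g + beta * d
print(np.allclose(x_hat, x_true))  # True: exact recovery in this toy setting
```

In practice the combination coefficients can be absorbed into the generator's and decoder's own parameters, so the full method simply minimizes the measurement misfit of the summed model in an unsupervised fashion.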