🤖 AI Summary
This work addresses the long-standing trade-off between perceptual quality and computational efficiency in image restoration tasks, including denoising, deblurring, and super-resolution. We propose embedding a pre-trained latent diffusion model (LDM) as an implicit prior into a variational optimization framework: the handcrafted regularizer is replaced by the LDM prior, and Half-Quadratic Splitting is used for efficient inference, so the LDM's generative prior is exploited at low computational overhead. Experiments across multiple restoration benchmarks show the method is competitive with state-of-the-art approaches, with particularly strong results on perceptual metrics such as LPIPS, while preserving reconstruction fidelity and visual realism. The approach offers a lightweight, task-agnostic, and extensible paradigm for generative-prior-based image restoration.
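For readers unfamiliar with Half-Quadratic Splitting, a generic sketch of the idea (standard form; the paper's exact latent-space formulation may differ): the restoration problem

$$\hat{x} = \arg\min_x \tfrac{1}{2}\|Ax - y\|_2^2 + \lambda R(x)$$

is split by introducing an auxiliary variable $z$,

$$\min_{x,z} \; \tfrac{1}{2}\|Ax - y\|_2^2 + \lambda R(z) + \tfrac{\mu}{2}\|x - z\|_2^2,$$

and minimized alternately over $x$ and $z$: the $x$-step is a quadratic data-fidelity subproblem, while the $z$-step is a proximal step on $R$ that can be replaced by a denoiser, here the LDM acting as implicit regularizer.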
📝 Abstract
In recent years, Diffusion Models have become the new state-of-the-art in deep generative modeling, ending the long-standing dominance of Generative Adversarial Networks. Inspired by the Regularization by Denoising principle, we introduce an approach that integrates a Latent Diffusion Model, trained for the denoising task, into a variational framework using Half-Quadratic Splitting, exploiting the model's regularization properties. Under appropriate conditions, easily met in many imaging applications, this approach reduces computational cost while achieving high-quality results. The proposed strategy, called Regularization by Latent Denoising (RELD), is then tested on a dataset of natural images for image denoising, deblurring, and super-resolution tasks. The numerical experiments show that RELD is competitive with other state-of-the-art methods, achieving particularly strong results under perceptual quality metrics.
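As a concrete illustration, here is a minimal, hypothetical sketch of such an HQS loop with a learned denoiser as the implicit prior, in the plug-and-play style the abstract alludes to. The names `A`, `At`, and `ldm_denoise` are placeholders, not the paper's API, and RELD's actual iterations operate through the LDM's latent space; this is only the generic template under those assumptions.

```python
import numpy as np

def hqs_restore(y, A, At, ldm_denoise, lam=0.1, mu0=1.0, n_iters=30):
    """Half-Quadratic Splitting with a learned denoiser as regularizer.

    y           : degraded observation (NumPy array)
    A, At       : forward operator and its adjoint (blur, downsampling, ...)
    ldm_denoise : pretrained denoiser used as implicit prior (a black box
                  here; RELD uses a latent diffusion model for this step)
    """
    x = At(y)          # crude initialization from the data
    z = x.copy()
    mu = mu0
    for _ in range(n_iters):
        # x-step: quadratic data-fidelity subproblem
        #   x = argmin_x 0.5*||A x - y||^2 + (mu/2)*||x - z||^2
        # solved here by a few gradient steps for generality
        # (closed-form / FFT solutions exist for many operators A)
        for _ in range(5):
            grad = At(A(x) - y) + mu * (x - z)
            x = x - 0.1 * grad       # fixed step size, for illustration only
        # z-step: proximal step on the regularizer, replaced by denoising
        #   z = argmin_z lam*R(z) + (mu/2)*||z - x||^2  ~=  denoise(x)
        z = ldm_denoise(x, noise_level=np.sqrt(lam / mu))
        mu *= 1.5                    # increasing penalty schedule
    return x
```

The increasing schedule for `mu` is the usual HQS continuation strategy: as `mu` grows, `x` and `z` are forced to agree, and the effective denoising strength `sqrt(lam / mu)` decays accordingly.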