🤖 AI Summary
Diffusion models face significant computational overhead and excessive sampling steps in image dehazing. This paper first reveals that the semantic latent space of a pre-trained diffusion model evolves across timesteps, separately encoding haze degradation characteristics and clean content structure. Leveraging this insight, we propose a novel paradigm that requires neither fine-tuning nor iterative sampling: the diffusion model is frozen, and multi-timestep latent representations are extracted; a lightweight dehazing network then performs cross-timestep feature fusion and image reconstruction. Our approach avoids costly model retraining and lengthy sampling procedures, substantially reducing inference cost. Extensive experiments on standard benchmarks—including SOTS and D-Hazy—demonstrate state-of-the-art performance, with significant improvements in PSNR and SSIM over existing methods. The source code is publicly available.
📝 Abstract
Diffusion models have recently been investigated as powerful generative solvers for image dehazing, owing to their remarkable capability to model the data distribution. However, the massive computational burden imposed by retraining diffusion models, coupled with the extensive sampling steps required during inference, limits their broader application to image dehazing. To address these issues, we explore the properties of hazy images in the semantic latent space of frozen pre-trained diffusion models, and propose a Diffusion Latent Inspired network for Image Dehazing, dubbed DiffLI$^2$D. Specifically, we first reveal that the semantic latent space of pre-trained diffusion models can represent the content and haze characteristics of hazy images as the diffusion time-step changes. Building upon this insight, we integrate the diffusion latent representations at different time-steps into a carefully designed dehazing network to provide instructions for image dehazing. Our DiffLI$^2$D avoids re-training diffusion models and the iterative sampling process by effectively utilizing the informative representations derived from the pre-trained diffusion models, which also offers a novel perspective for introducing diffusion models to image dehazing. Extensive experiments on multiple datasets demonstrate that the proposed method achieves superior performance to existing image dehazing methods. Code is available at https://github.com/aaaasan111/difflid.
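The paradigm described above can be sketched in miniature: noise a hazy image to several diffusion time-steps via the standard forward process, query a frozen model for a latent at each step, and fuse the latents into one conditioning signal for a dehazing head. The sketch below uses NumPy with hypothetical stand-ins (`frozen_diffusion_latent`, `add_noise`, `fuse_w` are illustrative names, not the paper's actual API); it shows the data flow only, under an assumed linear noise schedule.

```python
import numpy as np

rng = np.random.default_rng(0)

def frozen_diffusion_latent(x_t, t, dim=16):
    """Hypothetical stand-in for extracting a semantic latent of x_t at time-step t.
    In the paper this role is played by the frozen pre-trained diffusion model."""
    # Fixed ("frozen") projection of the noised image, plus a time-step embedding.
    proj = np.cos(np.outer(np.arange(dim), np.linspace(0.0, 1.0, x_t.size)))
    return proj @ x_t.ravel() + np.sin(t * np.arange(dim))

def add_noise(x0, t, alpha_bar):
    """Forward diffusion q(x_t | x_0) — a single closed-form step,
    so no retraining and no iterative sampling is needed."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps

# Toy hazy "image" and an assumed linear beta schedule with T = 1000 steps.
hazy = rng.random((8, 8))
T = 1000
alpha_bar = np.cumprod(1.0 - np.linspace(1e-4, 0.02, T))

# Extract latents at several time-steps: per the paper's insight, different
# steps emphasize content structure vs. haze degradation characteristics.
timesteps = [50, 200, 500]
latents = [frozen_diffusion_latent(add_noise(hazy, t, alpha_bar), t)
           for t in timesteps]

# Cross-timestep fusion: here a fixed convex combination stands in for the
# learned fusion inside the lightweight dehazing network.
fuse_w = rng.random(len(timesteps))
fuse_w /= fuse_w.sum()
fused = sum(w * z for w, z in zip(fuse_w, latents))
print(fused.shape)  # one conditioning vector for the dehazing head: (16,)
```

The key property the sketch illustrates is that every diffusion-model interaction is a single forward query at a chosen time-step, so inference cost scales with the handful of chosen time-steps rather than with a full reverse sampling chain.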