🤖 AI Summary
Pre-trained diffusion-based image dehazing methods improve perceptual quality but often introduce content hallucinations that compromise fidelity. To address this, we propose ProDehaze, an internal-prior-guided diffusion dehazing framework that jointly leverages intrinsic image structure and haze-specific external knowledge to constrain the generative process. Specifically, it comprises: (1) a Structure-Prompted Restorer that enforces internal structural priors in the latent space during denoising; and (2) a Haze-Aware Self-Correcting Refiner in the decoding process that combines haze-distribution alignment with adaptive regional attention, enabling selective, synergistic guidance from both external and internal priors. Evaluated on real-world datasets, our approach significantly suppresses color casts and artifacts while preserving structural fidelity and enhancing visual quality. Quantitative metrics (PSNR, SSIM, LPIPS) and qualitative assessments consistently demonstrate superiority over state-of-the-art methods.
📝 Abstract
Recent approaches using large-scale pretrained diffusion models for image dehazing improve perceptual quality but often suffer from hallucination issues, producing dehazed images that are unfaithful to the original. To mitigate this, we propose ProDehaze, a framework that employs internal image priors to direct the external priors encoded in pretrained models. We introduce two types of *selective* internal priors that prompt the model to concentrate on critical image areas: a Structure-Prompted Restorer in the latent space that emphasizes structure-rich regions, and a Haze-Aware Self-Correcting Refiner in the decoding process that aligns distributions between clearer input regions and the output. Extensive experiments on real-world datasets demonstrate that ProDehaze achieves high-fidelity results in image dehazing, particularly in reducing color shifts. Our code is available at https://github.com/TianwenZhou/ProDehaze.
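The abstract does not specify how the selective internal priors are extracted; as a minimal illustration only, the two ideas can be approximated in image space with a gradient-based mask for structure-rich regions and a dark-channel-style haze estimate for identifying clearer regions (both `structure_mask` and `haze_confidence` below are hypothetical stand-ins, not the paper's actual components, which operate in the latent space and decoder):

```python
import numpy as np

def structure_mask(img, q=0.8):
    """Binary mask of structure-rich regions via gradient magnitude.

    img: float array (H, W, 3) in [0, 1]. Illustrative proxy for the
    regions a Structure-Prompted Restorer might emphasize.
    """
    gray = img.mean(axis=2)
    gy, gx = np.gradient(gray)          # per-axis finite differences
    mag = np.hypot(gx, gy)              # gradient magnitude
    return mag >= np.quantile(mag, q)   # keep top (1 - q) fraction

def haze_confidence(img, patch=7):
    """Per-pixel haze estimate via a dark-channel-style min filter.

    Low values mark clearer regions whose statistics a refiner could
    align the output against (hypothetical stand-in for the paper's
    Haze-Aware Self-Correcting Refiner).
    """
    h, w, _ = img.shape
    dark = img.min(axis=2)              # per-pixel channel minimum
    pad = patch // 2
    padded = np.pad(dark, pad, mode="edge")
    out = np.empty_like(dark)
    for i in range(h):                  # local minimum over each patch
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

rng = np.random.default_rng(0)
img = rng.random((32, 32, 3))           # toy stand-in for a hazy image
sm = structure_mask(img)
hc = haze_confidence(img)
print(sm.shape, sm.dtype, hc.shape)
```

Both maps are spatially selective, which is the point of the abstract's framing: rather than letting the pretrained diffusion prior act uniformly, guidance is concentrated where internal evidence (structure, low haze density) is strongest.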