🤖 AI Summary
Current RAW-to-sRGB conversion methods suffer from structural detail loss and chromatic distortion. To address these issues, this paper proposes a grayscale–chrominance decoupling framework: structural and textural details are reconstructed in the grayscale domain, while precise colorization is achieved via a histogram-guided color mapping module. The authors integrate a texture-aware diffusion model to enhance local detail recovery and introduce a joint optimization strategy, comprising a histogram consistency loss and a chromatic fidelity loss, to ensure accurate color reproduction. Extensive experiments on multiple benchmark datasets demonstrate that the method significantly outperforms state-of-the-art approaches, achieving notable improvements in PSNR and SSIM. Qualitatively, the restored images exhibit a more natural, photorealistic appearance. The source code is publicly available.
📝 Abstract
RAW-to-sRGB mapping, i.e., the simulation of the traditional camera image signal processor (ISP), aims to generate DSLR-quality sRGB images from raw data captured by smartphone sensors. Despite achieving results comparable to sophisticated handcrafted camera ISP solutions, existing learning-based methods still struggle with detail disparity and color distortion. In this paper, we present ISPDiffuser, a diffusion-based decoupled framework that separates RAW-to-sRGB mapping into detail reconstruction in grayscale space and color consistency mapping from grayscale to sRGB. Specifically, we propose a texture-aware diffusion model that leverages the generative ability of diffusion models to focus on local detail recovery, together with a texture enrichment loss that prompts the diffusion model to generate more intricate texture details. Subsequently, we introduce a histogram-guided color consistency module that utilizes color histograms as guidance to learn precise color information for grayscale-to-sRGB color consistency mapping, with a color consistency loss designed to constrain the learned color information. Extensive experimental results show that the proposed ISPDiffuser outperforms state-of-the-art competitors both quantitatively and visually. The code is available at https://github.com/RenYangSCU/ISPDiffuser.
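The decoupling idea, reconstructing detail in grayscale and then constraining color via histogram statistics, can be illustrated with a toy NumPy sketch. This is not the paper's implementation (which operates on learned features inside a network); the function names, the BT.601 grayscale weights, the bin count, and the L1 histogram distance are all illustrative assumptions.

```python
import numpy as np

def rgb_to_grayscale(img):
    """Collapse an HxWx3 image in [0, 1] to grayscale.
    BT.601 luma weights are an illustrative choice, not the paper's."""
    return img @ np.array([0.299, 0.587, 0.114])

def color_histogram(img, bins=16):
    """Per-channel normalized color histogram of an HxWx3 image in [0, 1],
    concatenated into a single guidance vector."""
    hists = [np.histogram(img[..., c], bins=bins, range=(0.0, 1.0))[0]
             for c in range(3)]
    h = np.concatenate(hists).astype(np.float64)
    return h / h.sum()

def histogram_consistency_loss(pred, target, bins=16):
    """Toy stand-in for a color consistency objective: L1 distance
    between the color histograms of two images (0 for identical stats)."""
    return np.abs(color_histogram(pred, bins)
                  - color_histogram(target, bins)).sum()
```

In this spirit, one branch restores structure in the grayscale domain while the histogram vector supplies a compact color prior, and a histogram-matching penalty discourages global color drift in the final sRGB output.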