🤖 AI Summary
Existing document image shadow removal methods struggle with color shadows, especially on non-uniform backgrounds. To address this, we propose DocShaDiffusion, the first latent-space diffusion model designed specifically for document color shadow removal. Our method introduces a shadow soft-mask generation module and a shadow mask-aware guided diffusion module, coupled with a shadow-robust perceptual feature loss; operating in latent space improves both efficiency and fidelity. The approach preserves detail within shadowed regions and ensures color consistency across the restored image. Extensive experiments demonstrate state-of-the-art performance on three public benchmark datasets. Furthermore, we release SDCSRD, a large-scale synthetic document color shadow removal dataset comprising over 100,000 high-quality samples, to advance research in real-world color shadow removal. Both the source code and the dataset will be publicly available.
📝 Abstract
Document shadow removal is a crucial task in the field of document image enhancement. However, existing methods tend to handle shadows on constant-color backgrounds and ignore color shadows. In this paper, we design the first latent-space diffusion model for document image shadow removal, called DocShaDiffusion. It translates shadow images from pixel space to latent space, enabling the model to more easily capture essential features. To address the issue of color shadows, we design a shadow soft-mask generation module (SSGM). It produces accurate shadow masks and injects noise specifically into shadow regions. Guided by the shadow mask, a shadow mask-aware guided diffusion module (SMGDM) is proposed to remove shadows from document images by supervising the diffusion and denoising process. We also propose a shadow-robust perceptual feature loss to preserve details and structures in document images. Moreover, we develop a large-scale synthetic document color shadow removal dataset (SDCSRD). It simulates the distribution of realistic color shadows and provides strong support for model training. Experiments on three public datasets validate the proposed method's superiority over state-of-the-art methods. Our code and dataset will be publicly available.
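The core idea of the SSGM/SMGDM pipeline, as described above, is to estimate a soft shadow mask and use it to add diffusion noise preferentially inside shadow regions. Below is a minimal NumPy sketch of that idea; the function names, the luminance-difference mask heuristic, and the standard DDPM-style noising formula are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def soft_shadow_mask(shadow_img, shadow_free_img, eps=1e-6):
    # Hypothetical soft mask: per-pixel mean absolute difference between
    # the shadow image and its shadow-free counterpart, normalized to [0, 1].
    diff = np.abs(shadow_free_img - shadow_img).mean(axis=-1)  # (H, W)
    mask = (diff - diff.min()) / (diff.max() - diff.min() + eps)
    return mask[..., None]  # (H, W, 1)

def mask_guided_noising(latent, mask, t, betas, rng=None):
    # Standard DDPM forward step q(x_t | x_0), blended by the soft mask so
    # that shadow regions (mask -> 1) are noised while clean regions are kept.
    rng = rng or np.random.default_rng()
    alpha_bar = np.prod(1.0 - betas[: t + 1])
    noise = rng.standard_normal(latent.shape)
    noisy = np.sqrt(alpha_bar) * latent + np.sqrt(1.0 - alpha_bar) * noise
    return mask * noisy + (1.0 - mask) * latent
```

In this sketch, regions where the mask is zero pass through unchanged, so the denoiser's capacity is spent on the shadowed areas that actually need restoration.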