🤖 AI Summary
Image deblurring faces challenges including complex motion blur, difficulty in recovering fine details at high resolutions, and high computational overhead. This paper proposes a spatial-frequency dual-domain collaborative deblurring network. It integrates Vision Transformers (ViTs) with a learnable Fourier-domain FFT-ReLU module: ViTs model long-range spatial dependencies to capture global blur patterns, while the FFT-ReLU module introduces sparse, learnable nonlinearity in the frequency domain, explicitly regularizing frequency responses to suppress blur artifacts and preserve high-frequency details. This design forms a direct bridge between spatial attention mechanisms and frequency-domain sparsity. The method achieves state-of-the-art performance on multiple benchmarks (e.g., GoPro, HIDE), with significant PSNR/SSIM improvements. Comprehensive evaluations, covering quantitative metrics, qualitative analysis, and human perceptual assessment, demonstrate its superior visual quality and perceptual fidelity.
📝 Abstract
Image deblurring is vital in computer vision, aiming to recover sharp images from blurry ones caused by motion or camera shake. While deep learning approaches such as CNNs and Vision Transformers (ViTs) have advanced this field, they often struggle with complex or high-resolution blur and incur high computational cost. We propose a new dual-domain architecture that unifies Vision Transformers with a frequency-domain FFT-ReLU module, explicitly bridging spatial attention modeling and frequency sparsity. In this structure, the ViT backbone captures local and global dependencies, while the FFT-ReLU component enforces frequency-domain sparsity to suppress blur-related artifacts and preserve fine details. Extensive experiments on benchmark datasets demonstrate that this architecture achieves superior PSNR, SSIM, and perceptual quality compared to state-of-the-art models. Quantitative metrics, qualitative comparisons, and human preference evaluations all confirm its effectiveness, establishing a practical and generalizable paradigm for real-world image restoration.
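To illustrate the core idea behind the FFT-ReLU component, the sketch below shows a minimal, non-learnable version in NumPy: a feature map is transformed to the frequency domain, a ReLU is applied to the real and imaginary parts of the spectrum (which zeroes out negative components, inducing sparsity), and the result is transformed back. The function name `fft_relu` and the fixed `threshold` parameter are illustrative assumptions; the paper's actual module is learnable and integrated into a ViT backbone.

```python
import numpy as np

def fft_relu(feature_map, threshold=0.0):
    """Illustrative FFT-ReLU sketch (hypothetical, not the paper's exact module).

    Applies a 2-D FFT, clips negative real/imaginary spectral components
    to `threshold` (a sparsity-inducing nonlinearity), then inverts the FFT.
    """
    spectrum = np.fft.fft2(feature_map)
    # ReLU on each complex component: negative parts are zeroed, sparsifying the spectrum
    sparse = np.maximum(spectrum.real, threshold) + 1j * np.maximum(spectrum.imag, threshold)
    # Return the real part of the inverse transform as the filtered feature map
    return np.fft.ifft2(sparse).real

x = np.random.randn(8, 8)          # stand-in for one feature-map channel
y = fft_relu(x)
print(y.shape)                     # (8, 8) — same spatial size as the input
```

A feature map whose spectrum is already non-negative (e.g., a constant image, whose energy sits entirely in the positive DC bin) passes through unchanged, showing the operation only suppresses components, never amplifies them.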