Seeing Through the Blur: Unlocking Defocus Maps for Deepfake Detection

📅 2025-09-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
Generative AI has enabled increasingly realistic face forgery and fully synthetic imagery, severely undermining visual content authenticity. To address this, we propose a deepfake detection framework grounded in defocus blur maps—leveraging the physical prior of depth-of-field effects inherent in optical imaging to expose inconsistencies in defocus blur distribution as forensic cues. Unlike conventional RGB- or frequency-domain features, our approach is the first to model interpretable, physics-based optical features—specifically, defocus blur maps—as universal, cross-model, and cross-scene detection signals. By learning depth-of-field consistency representations end-to-end, the model uncovers subtle physical distortions beyond RGB and spectral domains. Extensive experiments demonstrate state-of-the-art detection performance across diverse generative models—including StyleGAN and diffusion models—with strong generalization and robustness to domain shifts and compression artifacts.

📝 Abstract
The rapid advancement of generative AI has enabled the mass production of photorealistic synthetic images, blurring the boundary between authentic and fabricated visual content. This challenge is particularly evident in deepfake scenarios involving facial manipulation, but also extends to broader AI-generated content (AIGC) cases involving fully synthesized scenes. As such content becomes increasingly difficult to distinguish from reality, the integrity of visual media is under threat. To address this issue, we propose a physically interpretable deepfake detection framework and demonstrate that defocus blur can serve as an effective forensic signal. Defocus blur is a depth-dependent optical phenomenon that naturally occurs in camera-captured images due to lens focus and scene geometry. In contrast, synthetic images often lack realistic depth-of-field (DoF) characteristics. To capture these discrepancies, we construct a defocus blur map and use it as a discriminative feature for detecting manipulated content. Unlike RGB textures or frequency-domain signals, defocus blur arises universally from optical imaging principles and encodes physical scene structure. This makes it a robust and generalizable forensic cue. Our approach is supported by three in-depth feature analyses, and experimental results confirm that defocus blur provides a reliable and interpretable cue for identifying synthetic images. We aim for our defocus-based detection pipeline and interpretability tools to contribute meaningfully to ongoing research in media forensics. The implementation is publicly available at: https://github.com/irissun9602/Defocus-Deepfake-Detection
Problem

Research questions and friction points this paper is trying to address.

Detecting AI-generated synthetic images and deepfakes
Leveraging defocus blur as a forensic signal for detection
Addressing lack of realistic depth-of-field in synthetic content
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses defocus blur as a forensic signal
Constructs a defocus blur map as a discriminative detection feature
Provides a physically interpretable detection framework
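The core idea above is that real photographs exhibit depth-dependent defocus blur, while synthetic images often do not, so a per-region blur map can expose the discrepancy. The sketch below is a deliberately crude, hypothetical illustration of that idea, assuming a simple heuristic: the local variance of a Laplacian response as a per-patch sharpness score (the paper's actual defocus-map construction is more sophisticated and learned end-to-end).

```python
import numpy as np

def defocus_blur_map(img, patch=8):
    """Crude per-patch sharpness map: variance of a 3x3 Laplacian response.

    Hypothetical illustration only -- the paper's defocus-map estimator is
    not reproduced here. Low variance of the Laplacian in a patch suggests
    a locally blurry (defocused) region.
    """
    # 3x3 Laplacian response via shifted-array arithmetic (no SciPy needed)
    lap = (-4 * img[1:-1, 1:-1]
           + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    h, w = lap.shape
    h, w = h - h % patch, w - w % patch
    blocks = lap[:h, :w].reshape(h // patch, patch, w // patch, patch)
    # Per-patch variance: high = sharp/in-focus, low = blurry/defocused
    return blocks.var(axis=(1, 3))

# Toy check: an image that is textured (sharp) on the left, smooth on the right
rng = np.random.default_rng(0)
img = np.zeros((64, 64))
img[:, :32] = rng.random((64, 32))       # high-frequency "in focus" half
img[:, 32:] = np.linspace(0, 1, 32)      # smooth "defocused" half
m = defocus_blur_map(img)
print(m[:, :3].mean() > m[:, -3:].mean())  # prints: True
```

A detector in the spirit of the paper would feed such a map (rather than raw RGB) to a classifier, so that the decision rests on depth-of-field consistency instead of generator-specific texture artifacts.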
Minsun Jeon
Sungkyunkwan University, Computer Science & Engineering Dept., Suwon, Republic of Korea
Simon S. Woo
Associate Professor, Sungkyunkwan University (SKKU)
Multimedia Forensics, Media Forensics, Deepfakes, Anomaly Detection, Satellite Systems