Realism to Deception: Investigating Deepfake Detectors Against Face Enhancement

📅 2025-09-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study identifies a critical vulnerability in deepfake detection: facial enhancement techniques—including conventional filters and GAN-based generative methods—improve visual quality but inadvertently distort biologically grounded forensic traces, thereby severely degrading detector robustness. We systematically evaluate three mainstream detection paradigms—naïve (pixel-level), spatial-domain, and frequency-domain—and find that even basic enhancement filters evade detection with an attack success rate (ASR) of up to 64.63%, while GAN-based enhancement raises the ASR to 75.12%. Crucially, we provide the first empirical evidence that facial enhancement functions as a stealthy anti-forensic attack. To counter this threat, we propose an adversarial training–based defense framework and demonstrate its effectiveness in improving model resilience against such enhancements. Our findings underscore a previously overlooked trade-off between aesthetic enhancement and forensic integrity, offering both a cautionary insight and a practical mitigation pathway for trustworthy deepfake detection deployment.

📝 Abstract
Face enhancement techniques are widely used to improve facial appearance. However, they can inadvertently distort biometric features, leading to a significant decrease in the accuracy of deepfake detectors. This study hypothesizes that these techniques, while improving perceptual quality, can degrade the performance of deepfake detectors. To investigate this, we systematically evaluate whether commonly used face enhancement methods can serve an anti-forensic role by reducing detection accuracy. We use both traditional image processing methods and advanced GAN-based enhancements to evaluate the robustness of deepfake detectors. We provide a comprehensive analysis of the effectiveness of these enhancement techniques, focusing on their impact on Naïve, Spatial, and Frequency-based detection methods. Furthermore, we conduct adversarial training experiments to assess whether exposure to face enhancement transformations improves model robustness. Experiments conducted on the FaceForensics++, DeepFakeDetection, and CelebDF-v2 datasets indicate that even basic enhancement filters can significantly reduce detection accuracy, achieving an attack success rate (ASR) of up to 64.63%. GAN-based techniques exploit these vulnerabilities further, achieving an ASR of up to 75.12%. Our results demonstrate that face enhancement methods can effectively function as anti-forensic tools, emphasizing the need for more resilient and adaptive forensic methods.
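The ASR metric reported above can be illustrated with a toy 1-D sketch: among fakes a detector initially flags, ASR is the fraction that slip past the same detector once "enhanced". The detector (a crude high-frequency-energy heuristic) and the enhancement (a box blur) below are hypothetical stand-ins, not the paper's models or data.

```python
import numpy as np

def attack_success_rate(detector, fakes, enhance):
    """ASR: of the fakes the detector catches pre-attack, the fraction
    it misses after the enhancement transform is applied."""
    caught = [x for x in fakes if detector(x)]        # detected before attack
    if not caught:
        return 0.0
    evaded = sum(1 for x in caught if not detector(enhance(x)))
    return evaded / len(caught)

def toy_detector(signal):
    # hypothetical detector: flags signals with high-frequency energy,
    # measured as mean absolute adjacent difference
    return np.abs(np.diff(signal)).mean() > 0.5

def smooth(signal):
    # stand-in "enhancement": a width-3 box blur that removes HF cues
    return np.convolve(signal, np.ones(3) / 3, mode="same")

rng = np.random.default_rng(0)
fakes = [rng.uniform(-1, 1, 64) for _ in range(100)]  # noisy 1-D "fakes"
asr = attack_success_rate(toy_detector, fakes, smooth)
print(f"ASR: {asr:.2f}")
```

In this toy setup the blur strips almost all high-frequency energy, so nearly every initially caught fake evades the detector afterwards, mirroring how beautification filters can erase the forensic traces detectors rely on.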
Problem

Research questions and friction points this paper is trying to address.

Face enhancement techniques reduce deepfake detector accuracy
Study evaluates enhancement methods as anti-forensic tools
GAN-based techniques exploit detector vulnerabilities most effectively
Innovation

Methods, ideas, or system contributions that make the work stand out.

Evaluating face enhancement methods on deepfake detectors
Using traditional and GAN-based techniques for robustness assessment
Conducting adversarial training to improve model resilience
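The last bullet's defense idea, as described in the abstract, is to expose the detector to face-enhancement transformations during training. A toy 1-D sketch of that effect follows, using a hypothetical threshold classifier on high-frequency energy and a box blur as the "enhancement"; this is an illustrative assumption, not the paper's training setup.

```python
import numpy as np

rng = np.random.default_rng(1)

def smooth(x, passes=1):
    # stand-in "enhancement": repeated width-3 box blur removing HF cues
    for _ in range(passes):
        x = np.convolve(x, np.ones(3) / 3, mode="same")
    return x

def hf_energy(x):
    # single forensic feature: mean absolute adjacent difference
    return np.abs(np.diff(x)).mean()

# toy data: "real" signals are smooth, "fake" signals carry HF noise
reals = [smooth(rng.uniform(-1, 1, 64), passes=3) for _ in range(200)]
fakes = [rng.uniform(-1, 1, 64) for _ in range(200)]

def fit_threshold(fake_set, real_set):
    # place the decision boundary between the least-HF fake and most-HF real
    lo = min(hf_energy(x) for x in fake_set)
    hi = max(hf_energy(x) for x in real_set)
    return (lo + hi) / 2

thr_plain = fit_threshold(fakes, reals)                # clean training only
thr_robust = fit_threshold(fakes + [smooth(f) for f in fakes], reals)

attacked = [smooth(f) for f in fakes]  # attacker enhances fakes at test time
def detect_rate(thr):
    return float(np.mean([hf_energy(x) > thr for x in attacked]))

print(f"plain: {detect_rate(thr_plain):.2f}  robust: {detect_rate(thr_robust):.2f}")
```

Training on enhanced fakes pulls the decision boundary down toward their weaker high-frequency signature, so the augmented detector recovers most of the attacked fakes that the plainly trained one misses.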
Muhammad Saad Saeed
University of Michigan
Computer Vision · Multimodal Learning - Deep Fakes · Content Based Web Filtering
Ijaz Ul Haq
SMILES Lab, College of Innovation & Technology, University of Michigan-Flint, Flint, USA
Khalid Malik
SMILES Lab, College of Innovation & Technology, University of Michigan-Flint, Flint, USA