🤖 AI Summary
This work addresses the growing security threats posed by highly realistic synthetic faces produced by generative models such as GANs and diffusion models, proposing a detection framework with strong cross-model generalization. For the first time, the authors repurpose the discriminator of a generative model (specifically, a fine-tuned ProGAN discriminator) to extract scale-adaptive features for forgery detection, complemented by temporal-consistency cues derived from diffusion models. The resulting approach avoids retraining the backbone network and substantially improves robustness against unseen generative architectures and challenging samples. Evaluated across nine state-of-the-art generative models, the method reaches an F1-score of up to 74.33%, exceeding existing Vision Transformer–based detectors by over 30% on average, and attains up to 88% F1-score on challenging benchmarks such as CIFAKE, substantially outperforming current baselines.
📝 Abstract
The rapid progress of generative adversarial networks (GANs) and diffusion models has enabled the creation of synthetic faces that are increasingly difficult to distinguish from real images. This progress, however, has also amplified the risks of misinformation, fraud, and identity abuse, underscoring the urgent need for detectors that remain robust across diverse generative models. In this work, we introduce Counterfeit Image Pattern High-level Examination via Representation (CIPHER), a deepfake detection framework that systematically reuses and fine-tunes discriminators originally trained for image generation. By extracting scale-adaptive features from ProGAN discriminators and temporal-consistency features from diffusion models, CIPHER captures generation-agnostic artifacts that conventional detectors often overlook. Through extensive experiments across nine state-of-the-art generative models, CIPHER demonstrates superior cross-model detection performance, achieving up to 74.33% F1-score and outperforming existing ViT-based detectors by over 30% in F1-score on average. Notably, our approach maintains robust performance on challenging datasets where baseline methods fail, reaching up to 88% F1-score on CIFAKE compared to near-zero performance from conventional detectors. These results validate the effectiveness of discriminator reuse and cross-model fine-tuning, establishing CIPHER as a promising approach toward building more generalizable and robust deepfake detection systems in an era of rapidly evolving generative technologies.
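The central idea of the abstract — keeping a pre-trained discriminator frozen as a feature extractor and training only a lightweight classification head on top — can be sketched in a few lines. This is a minimal, self-contained illustration, not the paper's implementation: the "frozen discriminator" is replaced by a fixed random projection, the data are synthetic toy images, and all names, shapes, and hyperparameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a frozen, pre-trained discriminator backbone.
# (Illustrative only: CIPHER reuses a fine-tuned ProGAN discriminator;
# here a fixed random projection plays that role.)
W_frozen = rng.normal(size=(64 * 64, 128))

def extract_features(images):
    """Map flattened 64x64 images to 128-d 'discriminator' features.

    The projection W_frozen is never updated, mirroring the idea of
    reusing a backbone without retraining it.
    """
    flat = images.reshape(len(images), -1)
    return np.tanh(flat @ W_frozen)

# Toy data: "real" vs "fake" 64x64 grayscale images, where fakes carry
# a small systematic bias standing in for generator artifacts.
real = rng.normal(loc=0.0, size=(200, 64, 64))
fake = rng.normal(loc=0.3, size=(200, 64, 64))
X = extract_features(np.concatenate([real, fake]))
y = np.concatenate([np.zeros(200), np.ones(200)])

# Lightweight head: logistic regression trained with plain gradient
# descent; only w and b are learned, the backbone stays frozen.
w, b = np.zeros(X.shape[1]), 0.0
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * (p - y).mean()

pred = (1.0 / (1.0 + np.exp(-(X @ w + b)))) > 0.5
acc = (pred == y).mean()
print(f"train accuracy with frozen backbone: {acc:.2f}")
```

Because the backbone is fixed, only a 128-dimensional linear head is optimized, which is what makes this style of discriminator reuse cheap to fine-tune across many generative models.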