🤖 AI Summary
Deepfake detectors suffer from high predictive uncertainty, frequent misclassifications, and low reliability due to the diversity of generative models. To address this gap, the paper presents the first systematic study of uncertainty modeling and analysis in deepfake detection. We propose a multi-granularity uncertainty quantification framework based on Bayesian neural networks and Monte Carlo dropout, explicitly decoupling aleatoric and epistemic uncertainty. Further, we introduce an uncertainty manifold for forgery source identification and generate pixel-level uncertainty heatmaps to localize generator-specific artifacts. Evaluated on two benchmark datasets covering nine state-of-the-art generative models, our analysis characterizes detector calibration, adversarial robustness, and cross-generator generalization. This work establishes uncertainty quantification as a fundamental requirement for trustworthy synthetic media authentication systems.
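To make the quantification step concrete, here is a minimal sketch of the standard MC-dropout decomposition for a classifier, assuming a PyTorch model that contains dropout layers. The function names and the `n_samples=30` default are illustrative choices, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def enable_mc_dropout(model: torch.nn.Module) -> None:
    """Put the model in eval mode but keep dropout layers stochastic."""
    model.eval()
    for m in model.modules():
        if isinstance(m, torch.nn.Dropout):
            m.train()

@torch.no_grad()
def mc_dropout_uncertainty(model, x, n_samples=30, eps=1e-12):
    """Decompose predictive uncertainty via T stochastic forward passes.

    Total predictive entropy H[E_t p_t] splits into the expected entropy
    E_t H[p_t] (aleatoric, data noise) plus the mutual information between
    prediction and weights (epistemic, model uncertainty).
    """
    enable_mc_dropout(model)
    probs = torch.stack(
        [F.softmax(model(x), dim=-1) for _ in range(n_samples)]
    )                                                           # (T, B, C)
    mean_probs = probs.mean(dim=0)                              # (B, C)
    total = -(mean_probs * (mean_probs + eps).log()).sum(-1)    # H[E_t p_t]
    aleatoric = -(probs * (probs + eps).log()).sum(-1).mean(0)  # E_t H[p_t]
    epistemic = total - aleatoric                               # mutual info
    return mean_probs, epistemic, aleatoric
```

Under this decomposition, a high epistemic score flags inputs unlike the detector's training data (e.g., an unfamiliar generator), while a high aleatoric score flags intrinsically ambiguous content.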
📝 Abstract
As generative models advance in both quality and quantity, deepfakes increasingly fuel online mistrust. Deepfake detectors have been proposed to counter this effect; however, detectors that label fake content as real, or vice versa, further feed the misinformation problem. We present the first comprehensive uncertainty analysis of deepfake detectors, systematically investigating how generative artifacts influence prediction confidence. Because each generator leaves different generative residues, generators also contribute to this uncertainty, as reflected in detectors' responses, so we analyze the uncertainty of deepfake detectors and generators jointly. Based on our observations, the uncertainty manifold holds enough consistent information to leverage uncertainty for deepfake source detection. Our approach leverages Bayesian Neural Networks and Monte Carlo dropout to quantify both aleatoric and epistemic uncertainties across diverse detector architectures. We evaluate uncertainty on two datasets with nine generators, using four blind and two biological detectors; we compare different uncertainty methods, explore region- and pixel-based uncertainty, and conduct ablation studies. We run binary real/fake, multi-class real/fake, source detection, and leave-one-out experiments across generator/detector combinations to characterize their generalization capability, model calibration, uncertainty, and robustness against adversarial attacks. We further introduce uncertainty maps that localize prediction confidence at the pixel level, revealing distinct patterns correlated with generator-specific artifacts. Our analysis provides critical insights for deploying reliable deepfake detection systems and establishes uncertainty quantification as a fundamental requirement for trustworthy synthetic media detection.
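The abstract does not spell out how the pixel-level uncertainty maps are produced; one simple, hedged way to obtain such a map is to backpropagate the MC-dropout predictive entropy to the input and read the per-pixel gradient magnitude as a localization signal. `pixel_uncertainty_map` and its defaults are hypothetical, not the paper's method.

```python
import torch
import torch.nn.functional as F

def pixel_uncertainty_map(model, x, n_samples=20, eps=1e-12):
    """Gradient-based sketch of a pixel-level uncertainty heatmap.

    Averages softmax outputs over stochastic dropout passes, computes the
    predictive entropy, and backpropagates it to the input; the per-pixel
    gradient magnitude indicates which regions drive the uncertainty.
    """
    model.eval()
    for m in model.modules():               # keep dropout stochastic
        if isinstance(m, torch.nn.Dropout):
            m.train()
    x = x.detach().clone().requires_grad_(True)
    probs = torch.stack(
        [F.softmax(model(x), dim=-1) for _ in range(n_samples)]
    ).mean(dim=0)                           # (B, C) mean over passes
    entropy = -(probs * (probs + eps).log()).sum()
    entropy.backward()
    return x.grad.abs().sum(dim=1)          # (B, H, W): channels merged
```

Per-patch MC-dropout variance over a sliding window would be an alternative, slower route to the same kind of map.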