🤖 AI Summary
As generative video quality improves, deepfakes become increasingly indistinguishable from authentic content to human observers, while existing detectors suffer from poor interpretability and non-negligible error rates. To address this, we propose ExDDV, the first dataset and benchmark for explainable deepfake detection in video, comprising approximately 5.4K real and deepfake videos manually annotated with natural-language explanations and click-based spatial labels that localize the artifacts. Using this dual supervision (textual and click-based), we evaluate a range of vision-language models under various fine-tuning and in-context learning strategies. Extensive experiments demonstrate that both supervision signals are indispensable: the resulting models can spatially localize forged regions and generate coherent, human-aligned textual explanations. The dataset, code, and trained models are publicly released.
📝 Abstract
The ever-growing realism and quality of generated videos make it increasingly hard for humans to spot deepfake content, forcing them to rely more and more on automatic deepfake detectors. However, deepfake detectors are also prone to errors, and their decisions are not explainable, leaving humans vulnerable to deepfake-based fraud and misinformation. To this end, we introduce ExDDV, the first dataset and benchmark for Explainable Deepfake Detection in Video. ExDDV comprises around 5.4K real and deepfake videos that are manually annotated with text descriptions (to explain the artifacts) and clicks (to point out the artifacts). We evaluate a number of vision-language models on ExDDV, performing experiments with various fine-tuning and in-context learning strategies. Our results show that text and click supervision are both required to develop robust explainable models for deepfake videos, which are able to localize and describe the observed artifacts. Our novel dataset and code to reproduce the results are available at https://github.com/vladhondru25/ExDDV.