X-AVDT: Audio-Visual Cross-Attention for Robust Deepfake Detection

📅 2026-03-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work proposes a robust deepfake video detection framework grounded in the internal mechanisms of generative models. Addressing the growing realism of deepfakes and the limited generalizability of current detectors to novel generators, the method leverages DDIM inversion to extract cross-modal alignment cues between audio and video streams. It shows, for the first time, that consistency signals are implicitly embedded in the cross-attention maps computed during generation. By integrating these insights with a multimodal discrepancy analysis, the approach effectively identifies forged content. To support this research, the authors introduce MMDF, a new multimodal deepfake dataset. Extensive experiments demonstrate that the proposed method significantly outperforms state-of-the-art techniques on MMDF and multiple external benchmarks, achieving a 13.1% accuracy gain and exhibiting strong generalization and robustness against unseen future generators.
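The summary's central operation, DDIM inversion, deterministically maps a sample back toward the generator's latent noise by running the DDIM update in reverse; discrepancies induced by this round trip are one source of the detection cue described above. A minimal NumPy sketch of the inversion loop follows. The `eps_model` noise predictor, the schedule, and all names here are stand-in assumptions for illustration, not the paper's implementation:

```python
import numpy as np

def ddim_invert(x0, eps_model, alpha_bars):
    """Deterministic DDIM inversion (toy version): starting from a clean
    sample x0, run the DDIM update in reverse to recover an approximate
    latent noise. `alpha_bars` is the cumulative noise schedule, ordered
    from t=0 (alpha_bar ~ 1) toward t=T (alpha_bar -> 0).
    `eps_model(x, t)` is an assumed noise-prediction network."""
    x = x0
    for t in range(len(alpha_bars) - 1):
        a_t, a_next = alpha_bars[t], alpha_bars[t + 1]
        eps = eps_model(x, t)                                  # predicted noise
        x0_pred = (x - np.sqrt(1.0 - a_t) * eps) / np.sqrt(a_t)  # implied clean sample
        x = np.sqrt(a_next) * x0_pred + np.sqrt(1.0 - a_next) * eps
    return x

def inversion_residual(x0, eps_model, alpha_bars, sampler):
    """Toy discrepancy cue: invert, regenerate with a paired sampler,
    and measure how far the reconstruction drifts from the input."""
    recon = sampler(ddim_invert(x0, eps_model, alpha_bars))
    return np.abs(recon - x0)
```

Intuitively, content that the generator can reproduce closely yields a small residual, while out-of-manifold (genuine or differently generated) content drifts more.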

📝 Abstract
The surge of highly realistic synthetic videos produced by contemporary generative systems has significantly increased the risk of malicious use, challenging both humans and existing detectors. Against this backdrop, we take a generator-side view and observe that internal cross-attention mechanisms in these models encode fine-grained speech-motion alignment, offering useful correspondence cues for forgery detection. Building on this insight, we propose X-AVDT, a robust and generalizable deepfake detector that probes generator-internal audio-visual signals accessed via DDIM inversion to expose these cues. X-AVDT extracts two complementary signals: (i) a video composite capturing inversion-induced discrepancies, and (ii) an audio-visual cross-attention feature reflecting modality alignment enforced during generation. To enable faithful cross-generator evaluation, we further introduce MMDF, a new multimodal deepfake dataset spanning diverse manipulation types and rapidly evolving synthesis paradigms, including GANs, diffusion, and flow-matching. Extensive experiments demonstrate that X-AVDT achieves leading performance on MMDF and generalizes strongly to external benchmarks and unseen generators, outperforming existing methods with accuracy improved by 13.1%. Our findings highlight the importance of leveraging internal audio-visual consistency cues for robustness to future generators in deepfake detection.
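As a rough illustration of the abstract's second signal, the audio-visual cross-attention feature, the sketch below computes standard cross-attention weights between video-frame queries and audio-frame keys, softmax(QKᵀ/√d), and scores how much attention mass falls near the temporal diagonal. The diagonal-mass heuristic and all names are illustrative assumptions; the paper's actual feature is read out of the generator's internal attention layers:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def av_cross_attention(video_q, audio_k):
    """Cross-attention weights between video queries (n_video, d) and
    audio keys (n_audio, d), as computed inside an attention layer."""
    d = video_q.shape[-1]
    scores = video_q @ audio_k.T / np.sqrt(d)
    return softmax(scores, axis=-1)          # rows sum to 1: (n_video, n_audio)

def alignment_score(attn, band=0.1):
    """Toy alignment cue: fraction of attention mass near the temporal
    diagonal. The assumption: lip-synced video attends to temporally
    close audio frames, while forgeries spread attention more broadly."""
    n_v, n_a = attn.shape
    v_pos = np.linspace(0.0, 1.0, n_v)[:, None]   # normalized video timestamps
    a_pos = np.linspace(0.0, 1.0, n_a)[None, :]   # normalized audio timestamps
    near_diag = np.abs(v_pos - a_pos) < band
    return float((attn * near_diag).sum() / n_v)
```

A detector in this spirit would feed such an alignment score, together with the inversion-induced discrepancy map, into a downstream classifier.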
Problem

Research questions and friction points this paper is trying to address.

deepfake detection
audio-visual alignment
synthetic video
generative models
multimodal forgery
Innovation

Methods, ideas, or system contributions that make the work stand out.

cross-attention
DDIM inversion
audio-visual alignment
deepfake detection
multimodal forgery