KLASSify to Verify: Audio-Visual Deepfake Detection Using SSL-based Audio and Handcrafted Visual Features

📅 2025-08-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address poor generalizability, high computational cost, and weak interpretability in audio-driven temporal deepfake video detection, this paper proposes a lightweight multimodal detection framework. Methodologically, it integrates self-supervised audio representations with handcrafted visual features, employs a graph attention network to model audio temporal dynamics, and designs a cross-modal fusion strategy. Key contributions include: (1) improved generalization to unseen deepfake generation techniques; (2) high detection accuracy combined with inherent interpretability; and (3) support for unimodal (audio-only) temporal localization. Evaluated on the AV-Deepfake1M++ benchmark, the framework achieves 92.78% AUC for deepfake classification and an IoU of 0.3536 for audio-only temporal localization, demonstrating robustness, efficiency, and practical applicability.

📝 Abstract
The rapid development of audio-driven talking head generators and advanced Text-To-Speech (TTS) models has led to more sophisticated temporal deepfakes. These advances highlight the need for robust methods capable of detecting and localizing deepfakes, even under novel, unseen attack scenarios. Current state-of-the-art deepfake detectors, while accurate, are often computationally expensive and struggle to generalize to novel manipulation techniques. To address these challenges, we propose multimodal approaches for the AV-Deepfake1M 2025 challenge. For the visual modality, we leverage handcrafted features to improve interpretability and adaptability. For the audio modality, we adapt a self-supervised learning (SSL) backbone coupled with graph attention networks to capture rich audio representations, improving detection robustness. Our approach strikes a balance between performance and real-world deployment, focusing on resilience and potential interpretability. On the AV-Deepfake1M++ dataset, our multimodal system achieves an AUC of 92.78% on the deepfake classification task and an IoU of 0.3536 for temporal localization using only the audio modality.
Problem

Research questions and friction points this paper is trying to address.

Detect and localize sophisticated temporal deepfakes
Improve generalization to novel manipulation techniques
Balance performance and real-world deployment
Innovation

Methods, ideas, or system contributions that make the work stand out.

Handcrafted visual features for interpretability
SSL-based audio with graph attention networks
Multimodal approach for robust deepfake detection
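The paper does not include its implementation here, but the audio branch it describes, graph attention over per-frame SSL embeddings to model temporal dynamics, can be illustrated with a minimal sketch. Everything below is an assumption for illustration: the random frame embeddings stand in for a real SSL backbone's output, the adjacency links each frame to its temporal neighbours, and `graph_attention` is a generic single-head GAT-style layer, not the authors' architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def temporal_adjacency(n_frames, window=2):
    """Adjacency linking each frame (node) to neighbours within `window` steps."""
    idx = np.arange(n_frames)
    return (np.abs(idx[:, None] - idx[None, :]) <= window).astype(float)

def graph_attention(H, A, W, a):
    """One GAT-style layer.

    H: (n, d_in) node features, A: (n, n) adjacency,
    W: (d_in, d_out) projection, a: (2*d_out,) attention vector.
    """
    Z = H @ W                                    # project node features
    n = Z.shape[0]
    # pairwise logits e[i, j] = LeakyReLU(a^T [z_i || z_j])
    pairs = np.concatenate([np.repeat(Z, n, axis=0), np.tile(Z, (n, 1))], axis=1)
    e = (pairs @ a).reshape(n, n)
    e = np.where(e > 0, e, 0.2 * e)              # LeakyReLU
    e = np.where(A > 0, e, -1e9)                 # mask non-neighbours
    alpha = np.exp(e - e.max(axis=1, keepdims=True))
    alpha /= alpha.sum(axis=1, keepdims=True)    # softmax over neighbours
    return alpha @ Z                             # neighbour-weighted aggregation

# toy stand-in for SSL frame embeddings: 8 frames, 16 dims
H = rng.standard_normal((8, 16))
A = temporal_adjacency(8, window=2)
W = rng.standard_normal((16, 8)) * 0.1
a = rng.standard_normal(16) * 0.1

out = graph_attention(H, A, W, a)
print(out.shape)  # one refined embedding per frame: (8, 8)
```

The refined per-frame embeddings would then feed a classification or localization head; the temporal window in the adjacency is what lets attention capture the short-range audio dynamics the summary refers to.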