SpeechForensics: Audio-Visual Speech Representation Learning for Face Forgery Detection

📅 2025-08-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
Face forgery video detection remains challenging in cross-dataset generalization and robustness to common perturbations. To address this, we propose an audio-visual speech representation-driven self-supervised forgery detection framework. Our method leverages the implicit constraint that audio signals impose on facial dynamics in authentic videos, enabling cross-domain detection without requiring any forged training samples. We design a self-supervised masked prediction task that jointly models local and global semantics across audio and visual modalities, thereby learning discriminative cross-modal representations. These representations are then transferred to the forgery detection task. Extensive experiments demonstrate that our approach significantly outperforms state-of-the-art methods on multiple benchmarks. Notably, it exhibits superior generalization to unseen datasets and strong robustness against common distortions—including compression, blurring, and filtering—without fine-tuning. The source code is publicly available.

📝 Abstract
Detection of face forgery videos remains a formidable challenge in the field of digital forensics, especially the generalization to unseen datasets and common perturbations. In this paper, we tackle this issue by leveraging the synergy between audio and visual speech elements, embarking on a novel approach through audio-visual speech representation learning. Our work is motivated by the finding that audio signals, enriched with speech content, can provide precise information effectively reflecting facial movements. To this end, we first learn precise audio-visual speech representations on real videos via a self-supervised masked prediction task, which encodes both local and global semantic information simultaneously. Then, the derived model is directly transferred to the forgery detection task. Extensive experiments demonstrate that our method outperforms the state-of-the-art methods in terms of cross-dataset generalization and robustness, without the participation of any fake video in model training. Code is available at https://github.com/Eleven4AI/SpeechForensics.
Problem

Research questions and friction points this paper is trying to address.

Detecting face forgery videos with cross-dataset generalization
Leveraging audio-visual speech for improved forgery detection
Self-supervised learning for robust forgery detection without fake data
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leverages audio-visual speech representation learning
Uses self-supervised masked prediction task
Transfers model to forgery detection directly
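Because the model is transferred to detection without training on any fake video, a natural reading is that a video is scored by how well its visual speech embeddings agree with its audio speech embeddings, with low agreement flagging a forgery. The sketch below illustrates that scoring idea only; the feature extractor, the per-frame cosine metric, and the threshold value are assumptions for illustration, not the paper's exact pipeline:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def forgery_score(audio_feats, visual_feats):
    """Mean per-frame audio-visual agreement over aligned frame embeddings.

    High scores mean the lip movements match the speech audio (likely real);
    low scores mean the modalities disagree (possibly manipulated).
    """
    sims = [cosine_similarity(a, v) for a, v in zip(audio_feats, visual_feats)]
    return sum(sims) / len(sims)

def is_fake(audio_feats, visual_feats, threshold=0.5):
    """Flag a video as fake when cross-modal agreement falls below a
    threshold (the 0.5 default here is an arbitrary illustrative choice)."""
    return forgery_score(audio_feats, visual_feats) < threshold
```

In this hypothetical setup, `audio_feats` and `visual_feats` would come from the jointly pretrained encoders, so no forged samples are needed at any stage: the decision rests entirely on the consistency that real speech imposes between the two modalities.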
👥 Authors
Yachao Liang
Institute of Information Engineering, Chinese Academy of Sciences; School of Cyber Security, University of Chinese Academy of Sciences
Min Yu
Institute of Information Engineering, Chinese Academy of Sciences; School of Cyber Security, University of Chinese Academy of Sciences
Gang Li
Deakin University
Jianguo Jiang
Institute of Information Engineering, Chinese Academy of Sciences; School of Cyber Security, University of Chinese Academy of Sciences
Boquan Li
Harbin Engineering University
Feng Yu
University of Exeter
Efficient AI · Continual Learning · Federated Learning · Foundation Model
Ning Zhang
Institute of Forensic Science, Ministry of Public Security
Xiang Meng
MIT Operation Research Center
Optimization · Quantum computing
Weiqing Huang
Institute of Information Engineering, Chinese Academy of Sciences; School of Cyber Security, University of Chinese Academy of Sciences