🤖 AI Summary
This work addresses the limited generalization of existing deepfake detection methods in handling diverse forgery techniques. To this end, we propose Face2Parts, a novel approach that leverages hierarchical feature representation (HFR) to extract features progressively from video frames, full faces, down to key facial regions—specifically lips, eyes, and nose. By integrating channel attention mechanisms with deep triplet learning, our method effectively models interdependencies across multiple granularities of facial regions. Extensive experiments demonstrate that Face2Parts achieves state-of-the-art performance across eight benchmark datasets, yielding average AUC scores of 98.42%, 79.80%, 85.34%, 89.41%, 84.07%, 95.62%, 80.76%, and 100% on FF++, CDF1, CDF2, DFD, DFDC, DTIM, PDD, and WLDR, respectively, thereby significantly enhancing adaptability to various deepfake types.
📝 Abstract
Multimedia data, particularly images and videos, is integral to many applications, including surveillance, visual interaction, biometrics, evidence gathering, and advertising. However, counterfeiters, whether amateur or skilled, can manipulate such content to create deepfakes, often with slanderous intent. To address this challenge, several forensic methods have been developed to verify the authenticity of the content. The effectiveness of these methods depends on which traces they target, and the diversity of manipulation techniques makes generalization difficult. In this article, we analyze existing forensic methods and observe that each has unique strengths in detecting deepfake traces by focusing on a specific region, such as the full frame, the face, or individual parts like the lips, eyes, and nose. Building on these insights, we propose a novel hybrid approach called Face2Parts, based on hierarchical feature representation ($HFR$), that exploits coarse-to-fine information to improve deepfake detection. The proposed method extracts features separately from the frame, the face, and key facial regions (i.e., lips, eyes, and nose) to explore their coarse-to-fine relationships. This design enables us to capture inter-dependencies among facial regions using a channel-attention mechanism and deep triplet learning. We evaluated the proposed method on benchmark deepfake datasets in intra-dataset, inter-dataset, and inter-manipulation settings. It achieves an average AUC of 98.42% on FF++, 79.80% on CDF1, 85.34% on CDF2, 89.41% on DFD, 84.07% on DFDC, 95.62% on DTIM, 80.76% on PDD, and 100% on WLDR. The results demonstrate that our approach generalizes effectively and outperforms existing methods.
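The abstract names two generic building blocks, a channel-attention mechanism and deep triplet learning. Below is a minimal NumPy sketch of the standard versions of both (squeeze-and-excitation-style channel attention and a margin-based triplet loss); the actual Face2Parts layer sizes, region pipeline, and fusion strategy are not specified here, so all shapes, weight names, and the margin value are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def channel_attention(feat, w1, w2):
    """SE-style channel attention (illustrative): squeeze via global
    average pooling, excite through a two-layer bottleneck, then
    rescale each channel of the feature map by its learned gate."""
    # feat: (C, H, W); w1: (C//r, C); w2: (C, C//r) -- r is a reduction ratio
    squeezed = feat.mean(axis=(1, 2))                # squeeze: (C,)
    hidden = np.maximum(w1 @ squeezed, 0.0)          # ReLU bottleneck
    gates = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))     # sigmoid gates: (C,)
    return feat * gates[:, None, None]               # channel-wise rescale

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Standard margin-based triplet loss on embedding vectors:
    pulls anchor toward the positive, pushes it from the negative."""
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)
```

In a hierarchical setup such as the one described, attention-reweighted features from each granularity (frame, face, part) would feed a shared embedding on which the triplet loss separates real from forged samples; the wiring shown here is only a sketch of the two components in isolation.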