🤖 AI Summary
Certain audio forgeries manipulate semantically critical keyframes while preserving overall auditory plausibility, which makes them significantly harder to detect. To address the limited hierarchical temporal modeling of existing methods, this paper proposes T3-Tracer, a joint analysis framework spanning three levels: frame, segment, and full audio. A Frame-Audio Feature Aggregation Module (FA-FAM) fuses local frame-level and global audio-level temporal information to judge the authenticity of each frame, while a Segment-level Multi-Scale Discrepancy-Aware Module (SMDAM) uses a dual-branch architecture over multi-scale temporal windows to model anomalous inter-frame evolution and precisely localize the abrupt boundaries induced by forgery. Extensive experiments on three challenging benchmark datasets show that the method achieves state-of-the-art performance in both forgery detection accuracy and localization precision. To the best of the authors' knowledge, it is the first approach to enable collaborative modeling and joint optimization across multiple temporal granularities.
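The frame-audio fusion idea behind FA-FAM can be illustrated with a minimal sketch: attach a global (audio-level) context vector to every per-frame embedding before classification. This is an assumption-laden toy, not the paper's implementation; mean pooling stands in for whatever audio-level encoder T3-Tracer actually uses, and the feature dimensions are made up.

```python
import numpy as np

def fuse_frame_and_audio(frames: np.ndarray) -> np.ndarray:
    """Fuse each frame embedding with a global audio-level context vector.

    frames: (T, D) array of per-frame embeddings.
    Returns (T, 2*D): each row is [frame embedding, mean-pooled audio
    embedding], so a per-frame classifier sees both local forgery cues
    and global semantic context.
    """
    global_ctx = frames.mean(axis=0, keepdims=True)          # (1, D) audio-level summary
    tiled = np.repeat(global_ctx, frames.shape[0], axis=0)   # broadcast to (T, D)
    return np.concatenate([frames, tiled], axis=1)           # (T, 2*D)

# Toy usage with random embeddings standing in for a real frame encoder.
frames = np.random.default_rng(1).normal(size=(20, 16))
fused = fuse_frame_and_audio(frames)
print(fused.shape)
```

The point of the concatenation is that a frame that looks locally plausible can still be flagged when it disagrees with the global context, which is exactly the failure mode of purely frame-independent detectors described above.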
📝 Abstract
Recently, partial audio forgery has emerged as a new form of audio manipulation. Attackers selectively modify a small set of semantically critical frames while preserving the overall perceptual authenticity, making such forgeries particularly difficult to detect. Existing methods focus on independently detecting whether a single frame is forged and lack the hierarchical structure needed to capture both transient and sustained anomalies across temporal levels. To address these limitations, we identify three key levels relevant to partial audio forgery detection and present T3-Tracer, the first framework that jointly analyzes audio at the frame, segment, and audio levels to comprehensively detect forgery traces. T3-Tracer consists of two complementary core modules: the Frame-Audio Feature Aggregation Module (FA-FAM) and the Segment-level Multi-Scale Discrepancy-Aware Module (SMDAM). FA-FAM detects the authenticity of each audio frame, combining frame-level and audio-level temporal information to capture intra-frame forgery cues and global semantic inconsistencies. To further refine and correct frame-level decisions, SMDAM detects forgery boundaries at the segment level: it adopts a dual-branch architecture that jointly models frame features and inter-frame differences across multi-scale temporal windows, effectively identifying the abrupt anomalies that appear at forgery boundaries. Extensive experiments conducted on three challenging datasets demonstrate that our approach achieves state-of-the-art performance.
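The segment-level, multi-scale differencing that SMDAM builds on can be sketched as follows. This is a minimal NumPy illustration under stated assumptions, not the paper's module: the scales `(1, 2, 4)`, the zero-padding, and the L2-norm boundary score are all placeholders for the learned dual-branch architecture.

```python
import numpy as np

def multi_scale_differences(frames: np.ndarray, scales=(1, 2, 4)) -> np.ndarray:
    """Inter-frame difference features at several temporal scales.

    frames: (T, D) array of per-frame embeddings.
    Returns (T, len(scales)*D): for each scale s, the difference between
    each frame and the frame s steps earlier (zero-padded at the start),
    exposing abrupt changes such as splice points.
    """
    feats = []
    for s in scales:
        diff = np.zeros_like(frames)
        diff[s:] = frames[s:] - frames[:-s]
        feats.append(diff)
    return np.concatenate(feats, axis=1)

def segment_boundary_score(frames: np.ndarray, scales=(1, 2, 4)) -> np.ndarray:
    """Toy boundary score: L2 norm of the multi-scale differences per frame.
    A spike suggests a transition between genuine and forged regions."""
    return np.linalg.norm(multi_scale_differences(frames, scales), axis=1)

# Toy example: 10 "genuine" frames followed by 10 "forged" frames whose
# embeddings have a shifted mean; the score spikes at the splice (t = 10).
rng = np.random.default_rng(0)
audio = np.vstack([rng.normal(0.0, 0.1, size=(10, 16)),
                   rng.normal(1.0, 0.1, size=(10, 16))])
scores = segment_boundary_score(audio)
print(int(np.argmax(scores)))
```

Using several difference scales is what lets the score react both to a sharp one-frame splice (small scale) and to a more gradual drift into a forged region (larger scales), mirroring the transient-versus-sustained distinction the abstract draws.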