TRACE: Training-Free Partial Audio Deepfake Detection via Embedding Trajectory Analysis of Speech Foundation Models

📅 2026-04-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the detection of partially spoofed audio, where synthetic segments are spliced into genuine recordings, by proposing an unsupervised method that requires no training, annotations, or model modifications. The approach extracts frame-level embedding trajectories from frozen speech foundation models and quantifies representational continuity via first-order differences, treating temporal smoothness as a universal forensic cue so that abrupt discontinuities reveal splice boundaries. This paradigm demonstrates strong cross-model generalization, achieving an equal error rate (EER) of 8.08% on the PartialSpoof dataset, comparable to supervised methods, and surpassing a supervised baseline on the LLM-generated LlamaPartialSpoof benchmark (24.12% vs. 24.49% EER) without any target-domain data.
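The equal error rate (EER) cited above is the operating point where the false-acceptance and false-rejection rates coincide. A minimal sketch of computing it from per-utterance scores follows; the function name and the higher-score-means-spoof convention are illustrative assumptions, not details from the paper:

```python
import numpy as np

def equal_error_rate(genuine_scores, spoof_scores):
    """Approximate the EER by scanning candidate thresholds and taking
    the point where false-accept and false-reject rates are closest.
    Convention (assumed here): higher score = more likely spoofed."""
    genuine = np.asarray(genuine_scores, dtype=float)
    spoof = np.asarray(spoof_scores, dtype=float)
    thresholds = np.sort(np.unique(np.concatenate([genuine, spoof])))
    best_gap, eer = float("inf"), 1.0
    for t in thresholds:
        far = np.mean(genuine >= t)  # genuine utterances flagged as spoofed
        frr = np.mean(spoof < t)     # spoofed utterances passed as genuine
        if abs(far - frr) < best_gap:
            best_gap, eer = abs(far - frr), (far + frr) / 2.0
    return eer

# Perfectly separable score distributions yield 0% EER
assert equal_error_rate([0.1, 0.2, 0.3], [0.7, 0.8, 0.9]) == 0.0
```

In practice EER is usually computed from the full ROC with interpolation; this threshold scan is a simple approximation that suffices for intuition.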

📝 Abstract
Partial audio deepfakes, where synthesized segments are spliced into genuine recordings, are particularly deceptive because most of the audio remains authentic. Existing detectors are supervised: they require frame-level annotations, overfit to specific synthesis pipelines, and must be retrained as new generative models emerge. We argue that this supervision is unnecessary. We hypothesize that speech foundation models implicitly encode a forensic signal: genuine speech forms smooth, slowly varying embedding trajectories, while splice boundaries introduce abrupt disruptions in frame-level transitions. Building on this, we propose TRACE (Training-free Representation-based Audio Countermeasure via Embedding dynamics), a framework that detects partial audio deepfakes by analyzing the first-order dynamics of frozen speech foundation model representations, without any training, labeled data, or architectural modification. We evaluate TRACE on four benchmarks spanning two languages, using six speech foundation models. On PartialSpoof, TRACE achieves 8.08% EER, competitive with fine-tuned supervised baselines. On LlamaPartialSpoof, the most challenging benchmark featuring LLM-driven commercial synthesis, TRACE surpasses a supervised baseline outright (24.12% vs. 24.49% EER) without any target-domain data. These results show that temporal dynamics in speech foundation models provide an effective, generalizable signal for training-free audio forensics.
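The core idea, scoring frame-level continuity via first-order differences, can be sketched as follows. This is a minimal illustration on synthetic embeddings; the paper does not specify its exact aggregation, so the per-transition norm and max-pooling used here are assumptions:

```python
import numpy as np

def trajectory_discontinuity_scores(embeddings: np.ndarray) -> np.ndarray:
    """Frame-level scores from first-order embedding dynamics.

    embeddings: (T, D) array of frame-level representations from a
    frozen speech foundation model. Returns the (T-1,) norms of the
    first-order differences; large values indicate abrupt transitions
    such as splice boundaries.
    """
    deltas = np.diff(embeddings, axis=0)   # (T-1, D) first-order differences
    return np.linalg.norm(deltas, axis=1)  # magnitude of each transition

def utterance_score(embeddings: np.ndarray) -> float:
    """Aggregate frame scores into one spoof score (max is one simple choice)."""
    return float(trajectory_discontinuity_scores(embeddings).max())

# Smooth trajectory (genuine-like) vs. one with an abrupt jump (splice-like)
rng = np.random.default_rng(0)
smooth = np.cumsum(rng.normal(0, 0.01, size=(100, 32)), axis=0)
spliced = smooth.copy()
spliced[50:] += 5.0  # simulated representational discontinuity at frame 50
assert utterance_score(spliced) > utterance_score(smooth)
```

The simulated splice produces one transition whose norm dwarfs the smooth random-walk increments, which is exactly the kind of outlier a threshold on these scores would flag.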
Problem

Research questions and friction points this paper is trying to address.

partial audio deepfake
splice detection
speech foundation models
training-free detection
audio forensics
Innovation

Methods, ideas, or system contributions that make the work stand out.

training-free
embedding trajectory
speech foundation models
partial audio deepfake
audio forensics