🤖 AI Summary
To address the trade-off between interpretability and detection performance in deepfake speech detection, this paper introduces a Top-K sparse activation mechanism into the final embedding layer of the AASIST architecture, proposing a latent-space sparse disentangled representation method. Inspired by sparse autoencoders (SAEs), the approach enforces a controllable sparsity level that compels the model to rely on a small set of discriminative spoofing features, thereby achieving disentangled encoding of attack patterns in the latent space. Disentanglement quality is evaluated quantitatively with completeness and modularity metrics based on mutual information. On the ASVSpoof5 test set, the method achieves an EER of 23.36% at 95% sparsity, improving both detection performance and interpretability while keeping the model lightweight.
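The Top-K sparse activation itself is simple to sketch: keep only the k largest activations of each embedding and zero the rest. Below is a minimal NumPy illustration, assuming a 160-dimensional embedding and selection of the k largest values; the actual layer in the paper may differ (e.g. it would be implemented with a differentiable `torch.topk` in training code):

```python
import numpy as np

def topk_activation(z, k):
    """Keep only the k largest values in each row; zero the rest."""
    idx = np.argpartition(z, -k, axis=-1)[..., -k:]          # indices of top-k per row
    out = np.zeros_like(z)
    np.put_along_axis(out, idx, np.take_along_axis(z, idx, axis=-1), axis=-1)
    return out

# 95% sparsity on a hypothetical 160-dim AASIST-style embedding batch
emb = np.random.default_rng(0).normal(size=(4, 160))
k = max(1, round(0.05 * emb.shape[-1]))                      # keep top 5% -> k = 8
sparse = topk_activation(emb, k)
print((sparse != 0).mean())                                  # fraction of active units -> 0.05
```

Because exactly k units survive per embedding, the sparsity level is controlled directly by the choice of k rather than by a penalty weight, which is the usual appeal of TopK over L1-style sparsity.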
📝 Abstract
Due to the rapid progress of speech synthesis, deepfake detection has become a major concern in the speech processing community. Because it is a critical task, systems must not only be efficient and robust but also provide interpretable explanations. Among the different approaches to explainability, we focus on the interpretation of latent representations, specifically the last embedding layer of AASIST, a deepfake detection architecture. We apply a TopK activation, inspired by SAEs, to this layer to obtain sparse representations that are used in the decision process. We demonstrate that sparse deepfake detection can improve detection performance, reaching an EER of 23.36% on the ASVSpoof5 test set with 95% sparsity. We then show that these representations are better disentangled, using completeness and modularity metrics based on mutual information. Notably, some attacks are directly encoded in the latent space.
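The completeness and modularity metrics are only named above; one common mutual-information formulation from the disentanglement literature can be sketched as follows. The binning scheme, the `discrete_mi` helper, and the exact normalizations are assumptions for illustration and may not match the paper's definitions:

```python
import numpy as np

def discrete_mi(x, y, bins=20):
    """Mutual information between two 1-D variables via a 2-D histogram."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

def mi_matrix(codes, factors, bins=20):
    """MI between every latent dimension and every factor (e.g. attack label)."""
    D, F = codes.shape[1], factors.shape[1]
    return np.array([[discrete_mi(codes[:, d], factors[:, f], bins)
                      for f in range(F)] for d in range(D)])

def modularity(M):
    """1 when each latent dimension carries information about a single factor."""
    theta = M.max(axis=1)                                 # strongest factor per dim
    denom = theta ** 2 * max(M.shape[1] - 1, 1) + 1e-12
    delta = ((M ** 2).sum(axis=1) - theta ** 2) / denom   # leakage to other factors
    return float((1.0 - delta).mean())

def completeness(M):
    """1 when each factor is captured by a single latent dimension."""
    p = M / (M.sum(axis=0, keepdims=True) + 1e-12)        # MI distribution per factor
    H = -(p * np.log(p + 1e-12)).sum(axis=0) / np.log(M.shape[0])
    return float((1.0 - H).mean())
```

With this kind of MI matrix, an attack that is "directly encoded in the latent space" would show up as one latent dimension holding most of the mutual information with that attack label, pushing both scores toward 1.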