🤖 AI Summary
Speech anonymization often retains residual speaker cues, posing privacy risks. To address insufficient de-anonymization attack capability, this paper proposes SegReConcat: a data augmentation method that segments speech at the word level, applies random or semantic-similarity-driven shuffling, and concatenates the rearranged sequence with the original utterance, thereby enhancing the attacker's ability to model long-term speaker identity cues. This paradigm strengthens speaker-feature learning from multiple perspectives without modifying either the anonymization system or the attacker's architecture. Evaluated on the VoicePrivacy 2024 benchmark, SegReConcat significantly improves de-anonymization performance on five of seven state-of-the-art anonymization methods, demonstrating its effectiveness and generalizability. The approach offers a novel perspective for evaluating the privacy robustness of speech anonymization.
📝 Abstract
Voice anonymization seeks to conceal a speaker's identity while maintaining the utility of the speech data. However, residual speaker cues often persist, posing privacy risks. We propose SegReConcat, a data augmentation method for attacker-side enhancement of automatic speaker verification systems. SegReConcat segments anonymized speech at the word level, rearranges the segments using random or similarity-based strategies to disrupt long-term contextual cues, and concatenates them with the original utterance, allowing an attacker to learn source-speaker traits from multiple perspectives. Evaluated within the VoicePrivacy Attacker Challenge 2024 framework across seven anonymization systems, SegReConcat improves de-anonymization on five of the seven.
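The core augmentation described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: it assumes word-level segments have already been obtained (e.g. from a forced aligner) as arrays of samples, and it shows only the random-shuffle variant, not the semantic-similarity ordering. The function name `segreconcat` and its interface are hypothetical.

```python
import numpy as np

def segreconcat(word_segments, seed=None):
    """Sketch of SegReConcat-style augmentation (assumed interface).

    word_segments: list of 1-D arrays, one per word, in original order.
    Returns the original utterance followed by a randomly rearranged
    copy, so a downstream attacker model sees the same speaker content
    with its long-term context disrupted.
    """
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(word_segments))        # random shuffle strategy
    original = np.concatenate(word_segments)           # unmodified utterance
    rearranged = np.concatenate([word_segments[i] for i in order])
    return np.concatenate([original, rearranged])      # concatenate both views
```

A similarity-based variant would replace the `rng.permutation` ordering with one driven by word embeddings; the concatenation step stays the same.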