🤖 AI Summary
To address the challenge that the spectral characteristics of voice in telecom and cloud-communication scenarios differ significantly from those of music (rendering existing audio fingerprinting techniques inadequate for voice retrieval and clustering), this paper presents a systematic adaptation of audio fingerprinting to speech tasks. It proposes a speech-optimized, robust fingerprint extraction method that integrates time-frequency masking enhancement with efficient hash-based indexing, and designs an unsupervised clustering algorithm based on acoustic similarity that achieves semantic-level grouping without ASR transcription. Experiments demonstrate high retrieval accuracy under noisy and degraded speech conditions, more than 10× faster clustering, and real-time, scalable deployment on CPU-only resources. Key contributions: (1) speech-customized fingerprint modeling, (2) an ASR-free acoustic clustering paradigm, and (3) a highly efficient, CPU-only service architecture.
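To make the fingerprinting idea concrete, here is a minimal, hypothetical sketch of landmark-style fingerprint extraction (spectral peaks hashed in pairs, in the spirit of Shazam-like systems). All parameter values and function names are illustrative assumptions, not the paper's actual configuration:

```python
# Hypothetical sketch of spectral-peak fingerprinting; parameters
# (n_fft, hop, fan_out, threshold) are assumptions for illustration.
import numpy as np

def spectrogram(signal, n_fft=512, hop=256):
    """Magnitude spectrogram via a Hann-windowed STFT."""
    window = np.hanning(n_fft)
    frames = [signal[i:i + n_fft] * window
              for i in range(0, len(signal) - n_fft + 1, hop)]
    return np.abs(np.fft.rfft(np.array(frames), axis=1))

def peak_fingerprints(spec, fan_out=5):
    """Hash pairs of per-frame spectral peaks into (f1, f2, dt) landmarks."""
    peaks = []  # (frame_index, freq_bin) of each frame's strongest bin
    for t, frame in enumerate(spec):
        f = int(np.argmax(frame))
        if frame[f] > spec.mean():          # crude peak threshold
            peaks.append((t, f))
    hashes = set()
    for i, (t1, f1) in enumerate(peaks):
        for t2, f2 in peaks[i + 1:i + 1 + fan_out]:
            hashes.add(hash((f1, f2, t2 - t1)))
    return hashes

# Two noisy copies of the same signal should share most landmarks,
# which is what makes retrieval robust to degradation.
rng = np.random.default_rng(0)
base = np.sin(2 * np.pi * 440 * np.arange(8000) / 8000)
a = peak_fingerprints(spectrogram(base + 0.01 * rng.standard_normal(8000)))
b = peak_fingerprints(spectrogram(base + 0.01 * rng.standard_normal(8000)))
overlap = len(a & b) / max(len(a | b), 1)
```

In a real system the hash set would be stored in an inverted index keyed by hash value, so a query reduces to a handful of exact lookups rather than a scan over all stored audio.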
📝 Abstract
Audio fingerprinting techniques have advanced greatly in recent years, enabling accurate and fast audio retrieval even when the queried sample is heavily degraded or recorded in noisy conditions. As expected, most existing work centers on music, with popular identification services such as Apple’s Shazam or Google’s Now Playing designed for recognizing individual recordings on mobile devices. However, the spectral content of speech differs from that of music, necessitating modifications to current audio fingerprinting approaches. This paper offers fresh insights into adapting existing techniques to the specialized challenge of speech retrieval in telecommunications and cloud communications platforms. The focus is on rapid and accurate audio retrieval in batch processing, typically on a centralized server, rather than on serving single requests. Moreover, the paper demonstrates how this approach can support audio clustering based on speech transcripts without performing the actual speech-to-text conversion. This optimization enables significantly faster processing without GPU computing, which state-of-the-art speech-to-text tools typically require for real-time operation.
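The ASR-free clustering idea can be sketched as grouping recordings by the overlap of their fingerprint hash sets, so no transcription step (and no GPU) is needed. The greedy strategy and the similarity threshold below are illustrative assumptions, not the paper's actual algorithm:

```python
# Hedged sketch: cluster recordings by Jaccard similarity of their
# fingerprint hash sets; threshold and greedy assignment are assumptions.
def jaccard(a, b):
    """Overlap ratio between two fingerprint hash sets."""
    union = a | b
    return len(a & b) / len(union) if union else 0.0

def cluster_by_fingerprints(fingerprints, threshold=0.4):
    """Greedy clustering: each item joins the first cluster whose
    representative is similar enough, else it starts a new cluster."""
    clusters = []  # list of (representative_hash_set, member_indices)
    for idx, fp in enumerate(fingerprints):
        for rep, members in clusters:
            if jaccard(rep, fp) >= threshold:
                members.append(idx)
                break
        else:
            clusters.append((fp, [idx]))
    return [members for _, members in clusters]

# Toy example: items 0 and 1 share most hashes; item 2 is distinct.
fps = [{1, 2, 3, 4}, {1, 2, 3, 5}, {9, 10, 11}]
groups = cluster_by_fingerprints(fps)  # -> [[0, 1], [2]]
```

Because each comparison is a set intersection over precomputed hashes, the whole pipeline stays CPU-bound and cheap, which is consistent with the batch, server-side deployment the abstract describes.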