🤖 AI Summary
This work addresses robust audio fingerprinting and retrieval for ultra-short clips (3 seconds). We propose a self-supervised contrastive learning framework built on the Conformer architecture. The method combines multi-scale feature modeling with time-aware data augmentation to learn embeddings invariant to temporal shifts, additive noise, reverberation, and extreme time stretching. A key contribution is a temporal robustness constraint that markedly improves cross-scenario generalization. Evaluated on multiple standard audio retrieval benchmarks, the approach achieves state-of-the-art performance while operating directly on 3-second segments, yielding compact and efficient embeddings. All code and pre-trained models are open-sourced, and the results are fully reproducible.
📝 Abstract
Conformers have shown strong results in speech processing due to their ability to capture both local and global interactions. In this work, we use a self-supervised contrastive learning framework to train Conformer-based encoders capable of generating unique embeddings for short audio segments that generalize well to previously unseen data. We achieve state-of-the-art results on audio retrieval tasks while using only 3 seconds of audio to generate embeddings. Our models are almost completely immune to temporal misalignment and achieve state-of-the-art results under other audio distortions such as noise, reverberation, and extreme time stretching. Code and models are publicly available, and the results are easy to reproduce, as we train and test on popular, freely available datasets of different sizes.
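The contrastive objective pulls together embeddings of two augmented views (e.g. shifted, noised, or stretched versions) of the same 3-second segment while pushing apart embeddings of different segments. The paper does not spell out its exact loss here, so the following is only a minimal NumPy sketch of the standard NT-Xent (SimCLR-style) loss commonly used in such frameworks; the function name, temperature value, and batch shapes are illustrative assumptions:

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.1):
    """SimCLR-style NT-Xent loss (illustrative sketch, not the paper's exact loss).

    z1[i] and z2[i] are embeddings of two augmented views of the same
    audio segment; all other pairs in the batch act as negatives.
    """
    # L2-normalize so dot products become cosine similarities
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    z = np.concatenate([z1, z2], axis=0)            # (2N, D)
    sim = z @ z.T / temperature                     # pairwise similarity logits
    np.fill_diagonal(sim, -np.inf)                  # exclude self-similarity
    n = len(z1)
    # The positive for sample i is its other view at index (i + n) mod 2n
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    log_prob = sim - np.log(np.sum(np.exp(sim), axis=1, keepdims=True))
    return -np.mean(log_prob[np.arange(2 * n), pos])
```

With near-identical view pairs the loss approaches zero, while unrelated pairs yield a loss near log(2N − 1); training the encoder to minimize it is what makes the embeddings invariant to the applied distortions.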