🤖 AI Summary
This work addresses the challenge of learning robust spatial representations from unlabeled binaural audio. The authors propose a self-supervised pretraining framework based on feature distillation: spatial features computed from clean binaural speech serve as pseudo-labels, and a student network is trained to predict them from augmented (noisy, reverberant) versions of the same speech, without any manual annotation. The key contribution is an end-to-end, label-free spatial feature distillation mechanism that decouples acoustic distortions from the underlying spatial cues. After fine-tuning for direction-of-arrival (DoA) estimation, the pretrained models outperform fully supervised baselines and classic signal processing methods in noisy and reverberant conditions, demonstrating the robustness and generalization of the learned representations.
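As a rough illustration of this pretraining scheme, the sketch below regresses clean-speech spatial cues from augmented binaural audio. Everything concrete here is an assumption, since neither the summary nor the abstract specifies it: the ILD target (one common binaural cue; the paper may use others, e.g. IPD), the `Encoder` architecture, the `N_FFT`/`HOP` values, the learning rate, and the helper names `spatial_targets` and `pretrain_step` are all hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical STFT settings; not taken from the paper.
N_FFT, HOP = 512, 128
N_FREQ = N_FFT // 2 + 1

def spatial_targets(clean):
    """Interaural level difference (ILD) computed from CLEAN binaural audio.
    ILD is one common spatial cue; the paper's exact feature set may differ.
    clean: (batch, 2, samples) -- left/right channels."""
    window = torch.hann_window(N_FFT)
    left = torch.stft(clean[:, 0], N_FFT, HOP, window=window, return_complex=True)
    right = torch.stft(clean[:, 1], N_FFT, HOP, window=window, return_complex=True)
    eps = 1e-8
    return 20.0 * torch.log10((left.abs() + eps) / (right.abs() + eps))  # (batch, freq, frames)

class Encoder(nn.Module):
    """Placeholder backbone mapping a binaural waveform to frame embeddings;
    this is reused for the downstream DoA model after pretraining."""
    def __init__(self, dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(2, dim, kernel_size=N_FFT, stride=HOP, padding=N_FFT // 2),
            nn.ReLU(),
            nn.Conv1d(dim, dim, kernel_size=3, padding=1),
            nn.ReLU(),
        )

    def forward(self, wav):          # (batch, 2, samples)
        return self.net(wav)         # (batch, dim, frames)

encoder = Encoder()
predictor = nn.Conv1d(256, N_FREQ, kernel_size=1)  # discarded after pretraining
opt = torch.optim.Adam(
    list(encoder.parameters()) + list(predictor.parameters()), lr=1e-4
)

def pretrain_step(clean, augmented):
    """One distillation step: predict clean spatial cues from augmented audio."""
    target = spatial_targets(clean).detach()        # pseudo-label, no gradient
    pred = predictor(encoder(augmented))            # (batch, freq, frames)
    frames = min(pred.shape[-1], target.shape[-1])  # align frame counts
    loss = F.mse_loss(pred[..., :frames], target[..., :frames])
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```

Note that no labels appear anywhere in this loop: the supervision signal comes entirely from the clean/augmented pairing, which is what makes the pretraining self-supervised.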
📝 Abstract
Recently, deep representation learning has shown strong performance across multiple audio tasks. However, its use for learning spatial representations from multichannel audio remains underexplored. We investigate a pretraining stage based on feature distillation to learn a robust spatial representation of binaural speech without the need for data labels. In this framework, spatial features are computed from clean binaural speech samples to form prediction labels. These clean features are then predicted from corresponding augmented speech using a neural network. After pretraining, we discard the spatial feature predictor and use the learned encoder weights to initialize a model that we fine-tune for direction-of-arrival (DoA) estimation. Our experiments demonstrate that, after fine-tuning for DoA estimation, the pretrained models show improved performance in noisy and reverberant environments compared to fully supervised models and classic signal processing methods.
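Continuing the sketch above, the fine-tuning stage might look as follows: the feature `predictor` is dropped, the pretrained `encoder` carries over its weights, and a new DoA head is trained on labeled data. The classification formulation, the 72-class azimuth grid, and the `DoAModel`/`finetune_step` names are assumptions for illustration; the paper may use a different output format.

```python
# Hypothetical azimuth grid (5-degree resolution); not taken from the paper.
N_AZIMUTH = 72

class DoAModel(nn.Module):
    """DoA estimator built on the pretrained encoder from the sketch above."""
    def __init__(self, pretrained_encoder, dim=256, n_classes=N_AZIMUTH):
        super().__init__()
        self.encoder = pretrained_encoder        # weights initialized from pretraining
        self.head = nn.Linear(dim, n_classes)    # trained from scratch

    def forward(self, wav):                      # (batch, 2, samples)
        h = self.encoder(wav).mean(dim=-1)       # average-pool over frames
        return self.head(h)                      # logits over azimuth classes

doa_model = DoAModel(encoder)  # the feature predictor is simply not reused
ft_opt = torch.optim.Adam(doa_model.parameters(), lr=1e-5)

def finetune_step(wav, azimuth_idx):
    """One supervised fine-tuning step on labeled (audio, azimuth-class) pairs."""
    logits = doa_model(wav)
    loss = F.cross_entropy(logits, azimuth_idx)
    ft_opt.zero_grad()
    loss.backward()
    ft_opt.step()
    return loss.item()
```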