MATPAC++: Enhanced Masked Latent Prediction for Self-Supervised Audio Representation Learning

📅 2025-08-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
In self-supervised audio representation learning, existing predictor modules inadequately model the inherent ambiguity of multi-source audio, limiting the generalisation of the pretext task. To address this, we propose the first integration of Multiple Choice Learning (MCL) into an audio SSL framework, explicitly capturing predictive ambiguity in masked latent representations. Our approach unifies MCL with the MATPAC architecture, forming a joint masked-prediction and multiple-choice pretraining paradigm that reformulates the pretraining objective to better reflect real-world acoustic uncertainty. We evaluate under a unified protocol using linear probing and AudioSet fine-tuning. Experiments demonstrate state-of-the-art performance on AudioSet and multiple downstream tasks, including speech, environmental sound, and music classification. Notably, even when trained exclusively on music data, our method yields highly competitive and generalizable audio representations, surpassing prior methods in transferability and robustness.

📝 Abstract
Masked latent prediction has emerged as a leading paradigm in self-supervised learning (SSL), especially for general audio and music representation learning. While recent methods have demonstrated strong performance, the role of the predictor module used at the output of such SSL systems remains mainly overlooked, despite being crucial for solving the pretext task at hand. In particular, this module should be able to deal with the ambiguity inherent in audio content, especially when it is composed of multiple sound sources. This work proposes a novel enhancement: integrating Multiple Choice Learning (MCL) to explicitly model prediction ambiguity and improve representation quality. We build on top of the recently proposed MATPAC system, improving its prediction and unsupervised classification pretext tasks with MCL. We extensively evaluate our method, MATPAC++, through both linear probing across multiple downstream tasks and fine-tuning on AudioSet, employing a unified protocol that enables rigorous and fair comparisons with state-of-the-art SSL approaches. Results show that our proposal achieves state-of-the-art when fine-tuned on AudioSet and overall state-of-the-art scores on downstream tasks. Additionally, we examine domain specialisation by training exclusively on music data, where our model achieves state-of-the-art performance with significantly improved efficiency.
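The core idea the abstract describes, using Multiple Choice Learning so the predictor can hedge between several plausible targets for a masked region, can be sketched with a winner-takes-all loss over multiple predictor heads. This is a minimal illustration under assumed shapes and an MSE objective, not the authors' MATPAC++ implementation: the head architecture, dimensions, and loss choice are placeholders.

```python
import torch
import torch.nn as nn


class MCLPredictor(nn.Module):
    """Illustrative MCL predictor: several hypothesis heads map context
    embeddings to the masked-target space; only the closest hypothesis
    per masked position receives gradient (winner-takes-all)."""

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.heads = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim))
            for _ in range(num_heads)
        )

    def forward(self, context: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        # context, target: (batch, num_masked, dim)
        preds = torch.stack([h(context) for h in self.heads])       # (H, B, N, D)
        errs = ((preds - target.unsqueeze(0)) ** 2).mean(dim=-1)    # (H, B, N)
        # Winner-takes-all: keep only the best hypothesis per position,
        # so each head specialises in a different plausible continuation.
        wta, _ = errs.min(dim=0)                                    # (B, N)
        return wta.mean()
```

Because only the winning head is penalised at each masked position, the heads naturally diversify, which is how MCL models the multi-source ambiguity the paper targets.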
Problem

Research questions and friction points this paper is trying to address.

Enhancing the predictor module in audio SSL to handle multi-source ambiguity
Improving representation quality with Multiple Choice Learning
Achieving state-of-the-art performance on audio and music tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Masked latent prediction enhanced with Multiple Choice Learning
Improved prediction and unsupervised classification pretext tasks
State-of-the-art performance on AudioSet
Aurian Quelennec
LTCI, Télécom Paris, Institut Polytechnique de Paris
Pierre Chouteau
LTCI, Télécom Paris, Institut Polytechnique de Paris
Geoffroy Peeters
Télécom Paris (previously IRCAM - STMS)
audio signal processing, machine learning, music information retrieval
Slim Essid
NVIDIA
Machine Learning, AI, Multimodal Language Models, MIR, Audio and Speech Processing