MERT: Acoustic Music Understanding Model with Large-Scale Self-supervised Training

📅 2023-05-31
🏛️ International Conference on Learning Representations
📈 Citations: 75
Influential: 11
🤖 AI Summary
To address weak modelling of pitch and tonality in music audio understanding, and the lack of music-aware priors in existing self-supervised learning (SSL) methods, this paper introduces MERT, a music-specific large-scale self-supervised pre-training framework. Its core contribution is a dual-teacher mechanism, pairing an acoustic teacher based on RVQ-VAE with a musical teacher based on the Constant-Q Transform (CQT), to generate stable pseudo-labels for masked language modelling (MLM) style pre-training. This design mitigates the training instability common in acoustic language model pre-training, allowing the paradigm to scale from 95M to 330M parameters. Evaluated on 14 diverse music understanding tasks, MERT attains state-of-the-art overall scores, outperforming general-purpose speech and audio SSL approaches, and establishes a self-supervised paradigm for music representation learning that jointly preserves acoustic fidelity and music-theoretic consistency.
📝 Abstract
Self-supervised learning (SSL) has recently emerged as a promising paradigm for training generalisable models on large-scale data in the fields of vision, text, and speech. Although SSL has been proven effective in speech and audio, its application to music audio has yet to be thoroughly explored. This is partially due to the distinctive challenges associated with modelling musical knowledge, particularly tonal and pitched characteristics of music. To address this research gap, we propose an acoustic Music undERstanding model with large-scale self-supervised Training (MERT), which incorporates teacher models to provide pseudo labels in the masked language modelling (MLM) style acoustic pre-training. In our exploration, we identified an effective combination of teacher models, which outperforms conventional speech and audio approaches in terms of performance. This combination includes an acoustic teacher based on Residual Vector Quantisation - Variational AutoEncoder (RVQ-VAE) and a musical teacher based on the Constant-Q Transform (CQT). Furthermore, we explore a wide range of settings to overcome the instability in acoustic language model pre-training, which allows our designed paradigm to scale from 95M to 330M parameters. Experimental results indicate that our model can generalise and perform well on 14 music understanding tasks and attain state-of-the-art (SOTA) overall scores.
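The abstract's dual-teacher objective can be sketched as a combined loss on masked frames: a cross-entropy term against discrete RVQ-VAE codes (the acoustic teacher) plus a regression term against a CQT spectrogram (the musical teacher). The sketch below is a minimal illustration with random stand-in targets and hypothetical projection heads (`acoustic_head`, `cqt_head`); the actual model, teacher codecs, and loss weighting are described in the paper, not here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shapes: T masked frames, D-dim student features,
# K entries in the acoustic teacher's codebook, n_bins CQT bins.
T, D, K, n_bins = 8, 16, 32, 12

hidden = rng.normal(size=(T, D))          # student outputs at masked positions
acoustic_head = rng.normal(size=(D, K))   # projects features to code logits
rvq_labels = rng.integers(0, K, size=T)   # stand-in pseudo-labels from RVQ-VAE

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Acoustic teacher loss: cross-entropy against RVQ codes on masked frames.
probs = softmax(hidden @ acoustic_head)
ce_loss = -np.log(probs[np.arange(T), rvq_labels]).mean()

# Musical teacher loss: regress the CQT of the masked region
# (random array here; a real CQT would come from the input audio).
cqt_head = rng.normal(size=(D, n_bins))
cqt_target = rng.normal(size=(T, n_bins))
mse_loss = ((hidden @ cqt_head - cqt_target) ** 2).mean()

# The relative weighting of the two teachers is a tunable hyperparameter.
total = ce_loss + 1.0 * mse_loss
```

Both terms are computed only on masked positions, in the MLM style the abstract describes; everything else in this snippet is illustrative scaffolding.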
Problem

Research questions and friction points this paper is trying to address.

self-supervised learning
music audio understanding
pitch and tone recognition
Innovation

Methods, ideas, or system contributions that make the work stand out.

Self-supervised Learning
Music Understanding
Pre-training Scaling
Yizhi Li
University of Manchester, M-A-P
LLM · Reasoning · Post-training · Computational Music
Ruibin Yuan
HKUST
Artificial Intelligence · Music Generation · Music Information Retrieval · Computer Music
Ge Zhang
Hong Kong University of Science and Technology, University of Waterloo, Beijing Academy of Artificial Intelligence
Yi Ma
Xingran Chen
Hanzhi Yin
Carnegie Mellon University
Chen-Li Lin
University of Manchester, University of Sheffield
A. Ragni
University of Sheffield
Emmanouil Benetos
Queen Mary University of London
Machine Listening · Audio Signal Processing · Music Information Retrieval · Machine Learning
N. Gyenge
University of Sheffield
R. Dannenberg
Carnegie Mellon University
Ruibo Liu
RS @Google DeepMind
Wenhu Chen
Assistant Professor at University of Waterloo
Natural Language Processing · Artificial Intelligence · Deep Learning
Gus G. Xia
MBZUAI, New York University
Yemin Shi
Dynamics Lab
Realtime Interaction · Multi-modality Model · World Model
Wen-Fen Huang
Beijing Academy of Artificial Intelligence
Yi-Ting Guo
Jie Fu
Hong Kong University of Science and Technology, Beijing Academy of Artificial Intelligence