Let Network Decide What to Learn: Symbolic Music Understanding Model Based on Large-scale Adversarial Pre-training

📅 2024-07-11
📈 Citations: 1
Influential: 0
🤖 AI Summary
To address the context-inference bias and overfitting that random token masking in Masked Language Modeling (MLM) introduces into Symbolic Music Understanding (SMU), this paper proposes an adversarial dynamic masking pre-training framework. The core innovation is a learnable masking decision network that replaces conventional random masking: by avoiding tokens that cannot be inferred from their context, it pushes the model to capture musical structural relationships rather than overfit the training distribution. Built on the MidiBERT architecture, the method combines adversarial masking, adaptive MLM pre-training, and downstream fine-tuning. Evaluated on four SMU downstream tasks, it consistently outperforms strong baselines and demonstrates robustness and transferability across musical domains. The code is publicly released.

📝 Abstract
As a crucial aspect of Music Information Retrieval (MIR), Symbolic Music Understanding (SMU) has garnered significant attention for its potential to assist both musicians and enthusiasts in learning and creating music. Recently, pre-trained language models have been widely adopted in SMU due to the substantial similarities between symbolic music and natural language, as well as the ability of these models to leverage limited music data effectively. However, some studies have shown that common pre-training methods such as the Masked Language Model (MLM) may introduce bias issues, such as racial discrimination in Natural Language Processing (NLP), and degrade the performance of downstream tasks, which also happens in SMU. This bias often arises when masked tokens cannot be inferred from their context, forcing the model to overfit the training set instead of generalizing. To address this challenge, we propose Adversarial-MidiBERT for SMU, which adaptively determines what to mask during MLM via a masker network, rather than employing random masking. By avoiding the masking of tokens that are difficult to infer from context, our model is better equipped to capture contextual structures and relationships, rather than merely conforming to the training data distribution. We evaluate our method across four SMU tasks, and our approach demonstrates excellent performance in all cases. The code for our model is publicly available at https://github.com/RS2002/Adversarial-MidiBERT.
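The masking-selection idea above can be sketched in a few lines: instead of masking a uniform random 15% of tokens, a masker network assigns each token a score reflecting how inferable it is from context, and only the highest-scoring (most inferable) positions are masked. The sketch below is illustrative only, assuming the masker's scores are given as a plain array; the function name and the top-k selection rule are our assumptions, not the authors' implementation.

```python
import numpy as np

def select_mask_positions(scores, mask_ratio=0.15):
    """Adaptive masking sketch: given per-token masker scores
    (higher = easier to infer from context), mask only the
    top-scoring fraction of positions instead of random ones."""
    seq_len = len(scores)
    k = max(1, int(seq_len * mask_ratio))
    # sort positions by score, descending, and take the top k
    order = np.argsort(scores)[::-1]
    return np.sort(order[:k])

# toy example: a 20-token sequence with random masker scores
rng = np.random.default_rng(0)
scores = rng.random(20)
positions = select_mask_positions(scores, mask_ratio=0.15)
```

In the full framework the masker is itself a trained network updated adversarially alongside the MLM objective; here it is reduced to a fixed score array to keep the selection rule visible.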
Problem

Research questions and friction points this paper is trying to address.

Pre-trained Language Models
Bias Issue
Symbolic Music Understanding
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adversarial MidiBERT
Adaptive Content Selection
Symbolic Music Understanding
Zijian Zhao
School of Computer Science and Engineering, Sun Yat-sen University, Guangzhou, China