Analyzable Chain-of-Musical-Thought Prompting for High-Fidelity Music Generation

📅 2025-03-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing autoregressive music generation models rely on token-by-token prediction, diverging from human composers' structural reasoning and consequently limiting musicality and coherence. This paper introduces MusiCoT, the first Chain-of-Thought (CoT) prompting framework tailored for music generation, guiding models to first plan global structure (e.g., sections, key, instrumentation) before generating audio tokens. The method features: (1) a music-domain-specific structured CoT paradigm; (2) zero-shot, label-free structural interpretability analysis and variable-length style referencing via CLAP; and (3) end-to-end autoregressive audio token generation. Experiments demonstrate that MusiCoT matches state-of-the-art fidelity while significantly mitigating repetitive generation. Human evaluation confirms substantial improvements in both musicality and structural coherence.
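
The two-stage decoding described above is easy to picture in code. The following is a minimal, hypothetical Python sketch of the control flow only: the stubbed sampler, vocabulary sizes, and structure-token format are all assumptions, since the paper's actual model and tokenization are not specified here.

```python
import numpy as np

rng = np.random.default_rng(0)

STRUCTURE_VOCAB = 256   # hypothetical size of the structure-token codebook
AUDIO_VOCAB = 1024      # hypothetical size of the audio-token codebook

def sample_next_token(context: list[int], vocab_size: int) -> int:
    """Stand-in for one AR decoding step; a real model would condition on context."""
    return int(rng.integers(vocab_size))

def generate(n_structure: int = 32, n_audio: int = 128) -> tuple[list[int], list[int]]:
    # Stage 1: decode the "chain of musical thoughts" (global structure plan).
    structure: list[int] = []
    for _ in range(n_structure):
        structure.append(sample_next_token(structure, STRUCTURE_VOCAB))
    # Stage 2: decode audio tokens conditioned on the completed structure prefix.
    audio: list[int] = []
    for _ in range(n_audio):
        audio.append(sample_next_token(structure + audio, AUDIO_VOCAB))
    return structure, audio

structure, audio = generate()
print(f"planned {len(structure)} structure tokens, generated {len(audio)} audio tokens")
```

The key design point this sketch illustrates is that the structure plan is fully decoded before any audio token is sampled, so every audio step can attend to the whole plan rather than inferring structure on the fly.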

📝 Abstract
Autoregressive (AR) models have demonstrated impressive capabilities in generating high-fidelity music. However, the conventional next-token prediction paradigm in AR models does not align with the human creative process in music composition, potentially compromising the musicality of generated samples. To overcome this limitation, we introduce MusiCoT, a novel chain-of-thought (CoT) prompting technique tailored for music generation. MusiCoT empowers the AR model to first outline an overall music structure before generating audio tokens, thereby enhancing the coherence and creativity of the resulting compositions. By leveraging the contrastive language-audio pretraining (CLAP) model, we establish a chain of "musical thoughts", making MusiCoT scalable and independent of human-labeled data, in contrast to conventional CoT methods. Moreover, MusiCoT allows for in-depth analysis of music structure, such as instrumental arrangements, and supports music referencing, accepting variable-length audio inputs as optional style references. This innovative approach effectively addresses copying issues, positioning MusiCoT as a vital practical method for music prompting. Our samples are available at https://MusiCoT.github.io/. Our experimental results indicate that MusiCoT consistently achieves superior performance across both objective and subjective metrics, producing music quality that rivals state-of-the-art generation models.
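
As a rough illustration of the CLAP-based referencing idea, the sketch below is an assumption, not the paper's pipeline: it embeds a variable-length reference track and a generated sample using the laion_clap package's documented interface and compares them by cosine similarity. The file names are placeholders.

```python
import numpy as np
import laion_clap

# Load a pretrained CLAP model (laion_clap downloads a default checkpoint).
model = laion_clap.CLAP_Module(enable_fusion=False)
model.load_ckpt()

# Embed a variable-length style reference and a generated sample.
# "reference.wav" and "generated.wav" are hypothetical placeholder files.
embs = model.get_audio_embedding_from_filelist(
    x=["reference.wav", "generated.wav"], use_tensor=False
)
ref_emb, gen_emb = embs[0], embs[1]

# Cosine similarity in CLAP space as a crude proxy for stylistic closeness.
cos = float(np.dot(ref_emb, gen_emb) /
            (np.linalg.norm(ref_emb) * np.linalg.norm(gen_emb)))
print(f"CLAP style similarity: {cos:.3f}")
```

Because CLAP embeds audio of arbitrary length into a fixed-size vector, a similarity score like this can compare a style reference against generated output without any human labels, which is the property the abstract credits for MusiCoT's scalability.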
Problem

Research questions and friction points this paper is trying to address.

AR next-token prediction misaligns with the human creative process in composition
Musicality is compromised by token-by-token generation
Existing models lack explicit structure planning for music generation
Innovation

Methods, ideas, or system contributions that make the work stand out.

MusiCoT applies chain-of-thought prompting to music generation
Leverages CLAP to build a scalable, label-free chain of musical thoughts
Supports variable-length audio referencing for style adaptation
👥 Authors
Max W. Y. Lam (Kunlun Tech): Diffusion Models, Music Generation, Speech Synthesis, Source Separation, ASR
Yijin Xing (Kunlun Inc.)
Weiya You (Kunlun Inc.)
Jingcheng Wu (Kunlun Group): Deep Learning, AIGC, Music Intelligence
Zongyu Yin (Kunlun Inc.)
Fuqiang Jiang (Kunlun Inc.)
Hangyu Liu (Beijing University of Posts and Telecommunications): Large Language Model, Embodied AI
Feng Liu (Kunlun Inc.)
Xingda Li (Kunlun Inc.)
Wei-Tsung Lu (Kunlun Inc.)
Hanyu Chen (Kunlun Inc.)
Tong Feng (Kunlun Inc.)
Tianwei Zhao (Kunlun Inc.)
Chien-Hung Liu (Kunlun Inc.)
Xuchen Song (CTO @ Mureka.ai | Head of Multimodality & Spatial AI @ Skywork.ai): Music Generation, Multimodality, Multimodal Understanding, Multimodal Generation
Yang Li (Kunlun Inc.)
Yahui Zhou (Kunlun Inc.)