🤖 AI Summary
Text-to-song generation, which must produce both vocals and accompaniment from text, is hampered by domain complexity, data scarcity, and the cumbersome multi-stage pipelines of existing approaches. This paper proposes SongGen, a fully open-source, single-stage autoregressive Transformer that jointly models vocals and accompaniment, enabling fine-grained control over lyrics, genre, mood, timbre, and other attributes; voice cloning is supported via an optional three-second reference clip. Within a unified autoregressive framework, SongGen supports two output modes in place of conventional cascaded pipelines: mixed mode, which generates the vocal-accompaniment mixture directly, and dual-track mode, which synthesizes the two tracks separately for downstream flexibility. The authors explore diverse token pattern strategies for each mode and design an automated data preprocessing pipeline with effective quality control, reporting notable improvements in audio quality, musical coherence, and controllability. To foster reproducibility and community advancement, the model weights, training code, annotated data, and preprocessing pipeline will be released, with generated audio samples showcased on the project page.
📝 Abstract
Text-to-song generation, the task of creating vocals and accompaniment from textual inputs, poses significant challenges due to domain complexity and data scarcity. Existing approaches often employ multi-stage generation procedures, resulting in cumbersome training and inference pipelines. In this paper, we propose SongGen, a fully open-source, single-stage auto-regressive transformer designed for controllable song generation. The proposed model facilitates fine-grained control over diverse musical attributes, including lyrics and textual descriptions of instrumentation, genre, mood, and timbre, while also offering an optional three-second reference clip for voice cloning. Within a unified auto-regressive framework, SongGen supports two output modes: mixed mode, which generates a mixture of vocals and accompaniment directly, and dual-track mode, which synthesizes them separately for greater flexibility in downstream applications. We explore diverse token pattern strategies for each mode, leading to notable improvements and valuable insights. Furthermore, we design an automated data preprocessing pipeline with effective quality control. To foster community engagement and future research, we will release our model weights, training code, annotated data, and preprocessing pipeline. The generated samples are showcased on our project page at https://liuzh-19.github.io/SongGen/, and the code will be available at https://github.com/LiuZH-19/SongGen.
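The distinction between the two output modes can be pictured at the token level. The sketch below is purely illustrative, assuming integer codec-token streams and a simple frame-wise interleaving for dual-track mode; the paper's actual codec tokens and token pattern strategies may differ.

```python
def mixed_mode(mixture_tokens):
    """Mixed mode (illustrative): the decoder emits a single stream of
    codec tokens representing vocals and accompaniment already mixed."""
    return list(mixture_tokens)


def dual_track_mode(vocal_tokens, acc_tokens):
    """Dual-track mode (illustrative): vocals and accompaniment are
    separate streams; interleaving them frame by frame lets one
    autoregressive decoder still produce both in a single sequence."""
    assert len(vocal_tokens) == len(acc_tokens), "streams must be aligned"
    out = []
    for v, a in zip(vocal_tokens, acc_tokens):
        out.extend([v, a])  # one vocal token, then one accompaniment token
    return out


# Toy token IDs, not real codec outputs.
mixed = mixed_mode([101, 102, 103])
dual = dual_track_mode([1, 2, 3], [51, 52, 53])
```

Dual-track output costs a longer sequence but keeps the stems separable, which is what gives the mode its flexibility for downstream editing.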