🤖 AI Summary
Existing codec-based TTS models often suffer from unstable generation, high repetition rates, and degraded fidelity, particularly in long-text synthesis and highly expressive voice cloning. To address these limitations, we propose MARS6: a compact (70M-parameter), robust hierarchical codec TTS model. Its core components are a hierarchical Transformer backbone whose decoder models speech tokens at a low rate of 12 Hz, dedicated training objectives that suppress repetition and improve output stability, and an efficient inference scheduling strategy. Experiments demonstrate that MARS6 achieves significant improvements over prior methods in zero-shot voice-cloning fidelity, long-text repetition suppression, and inference speed. Both objective metrics (e.g., speaker similarity) and subjective human evaluations (MOS) indicate performance comparable to models many times its size.
📝 Abstract
Codec-based text-to-speech (TTS) models have shown impressive quality with zero-shot voice cloning abilities. However, they often struggle with more expressive references or complex text inputs. We present MARS6, a robust encoder-decoder transformer for rapid, expressive TTS. MARS6 is built on recent improvements in spoken language modelling. Using a hierarchical setup for its decoder, MARS6 processes new speech tokens at a rate of only 12 Hz, enabling efficient modelling of long-form text while retaining reconstruction quality. We combine several recent training and inference techniques to reduce repetitive generation and improve output stability and quality. This enables the 70M-parameter MARS6 to achieve performance similar to models many times larger. We show this in objective and subjective evaluations, comparing TTS output quality and reference speaker cloning ability. Project page: https://camb-ai.github.io/mars6-turbo/
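The benefit of the 12 Hz hierarchical decoder can be sketched in a few lines. The sketch below is illustrative, not MARS6's implementation: the codec token rate (48 Hz) and patch size (4) are assumed values chosen only to show how grouping high-rate codec tokens into low-rate patches shrinks the sequence the slow (global) decoder must attend over, while a fast (local) decoder would expand each patch back into its constituent tokens.

```python
# Hypothetical sketch of hierarchical token patching (assumed rates, not the
# MARS6 implementation): a codec emitting tokens at 48 Hz is grouped into
# patches of 4, so the global decoder operates at 48 / 4 = 12 Hz.

CODEC_RATE_HZ = 48  # assumed acoustic codec token rate (illustrative)
PATCH_SIZE = 4      # tokens per patch -> 12 Hz global sequence

def to_patches(tokens, patch_size=PATCH_SIZE):
    """Group a flat codec-token stream into fixed-size patches.

    The global (slow) decoder sees one position per patch; a local (fast)
    decoder would reconstruct the patch_size codec tokens inside each patch.
    """
    if len(tokens) % patch_size:
        raise ValueError("token stream length must be a multiple of patch_size")
    return [tokens[i:i + patch_size] for i in range(0, len(tokens), patch_size)]

# 10 seconds of speech: 480 codec tokens, but only 120 global positions.
tokens = list(range(10 * CODEC_RATE_HZ))
patches = to_patches(tokens)
print(len(tokens), len(patches))  # 480 120
```

Since self-attention cost grows quadratically with sequence length, a 4x reduction in positions cuts the global decoder's attention cost by roughly 16x on long-form inputs, which is the motivation for modelling at a low frame rate.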