🤖 AI Summary
This work addresses zero-shot spontaneous-style voice cloning for the ISCSLP 2024 Conversational Voice Clone Challenge (CoVoC), proposing a zero-shot TTS system capable of synthesizing natural, spontaneous-style speech. Methodologically, it builds a LLaMA-based codec language model that predicts speech tokens with a delay pattern; incorporates Classifier-Free Guidance (CFG) to strengthen conditional guidance on token prediction and improve intelligibility; and combines effective data preprocessing with fine-tuning on selected high-quality spontaneous speech data. In the official CoVoC constrained-track evaluation, the system achieves the best naturalness MOS of 3.80 among all participants while obtaining strong speech quality and speaker similarity results.
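The delay pattern mentioned above is a standard trick for autoregressive codec language models: each residual codebook stream is shifted one step relative to the previous one, so all codebooks can be predicted jointly in a single left-to-right pass. The sketch below is illustrative only; the function name, `pad_id`, and array layout are assumptions, not the paper's implementation.

```python
import numpy as np

def apply_delay_pattern(codes: np.ndarray, pad_id: int) -> np.ndarray:
    """Shift codebook k right by k steps (delay pattern), so at decoding
    step t the model predicts codebook k's token for frame t - k.

    codes: (K, T) array of codec token ids, one row per codebook.
    Returns a (K, T + K - 1) array padded with pad_id.
    """
    K, T = codes.shape
    out = np.full((K, T + K - 1), pad_id, dtype=codes.dtype)
    for k in range(K):
        out[k, k:k + T] = codes[k]  # row k delayed by k positions
    return out
```

For example, with two codebooks of three frames each, the second row is shifted one step right and both rows are padded to equal length, yielding a rectangular token grid the language model can consume.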
📝 Abstract
This paper describes a zero-shot spontaneous-style TTS system for the ISCSLP 2024 Conversational Voice Clone Challenge (CoVoC). We propose a LLaMA-based codec language model with a delay pattern to achieve spontaneous-style voice cloning. To improve speech intelligibility, we introduce the Classifier-Free Guidance (CFG) strategy in the language model to strengthen conditional guidance on token prediction. To generate high-quality utterances, we adopt effective data preprocessing operations and fine-tune our model with selected high-quality spontaneous speech data. The official evaluations in the CoVoC constrained track show that our system achieves the best speech naturalness MOS of 3.80 and obtains considerable speech quality and speaker similarity results.
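Classifier-Free Guidance as used in language models typically combines conditional and unconditional logits at each decoding step, extrapolating toward the conditional prediction. The following is a minimal sketch of that standard formulation, not the paper's exact implementation; the function name and guidance-scale convention are assumptions.

```python
import numpy as np

def cfg_logits(cond_logits: np.ndarray,
               uncond_logits: np.ndarray,
               scale: float) -> np.ndarray:
    """Classifier-free guidance on next-token logits.

    Moves the distribution toward the conditional prediction:
    scale = 1.0 recovers the conditional logits unchanged;
    scale > 1.0 strengthens the conditioning signal (here, the
    text/prompt guidance on speech-token prediction).
    """
    return uncond_logits + scale * (cond_logits - uncond_logits)
```

In practice this means running the model twice per step (with and without the conditioning input) and sampling the next speech token from the guided logits.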