🤖 AI Summary
Existing Transformer-based voice cloning models suffer from limited context length, weak prosody modeling, and low inference efficiency. To address these issues, this paper proposes an efficient TTS framework based on Gated Linear Attention (GLA). Our method is the first to extend initial-state tuning to voice cloning, enabling multi-segment long-speech input and full utilization of the context window. By replacing self-attention with GLA, we break the quadratic computational complexity bottleneck, supporting effective long-sequence modeling and high-throughput inference. Experiments demonstrate that, using only 3–15 minutes of target-speaker data, our approach matches the performance of full fine-tuning baselines and equals or surpasses state-of-the-art models with four times more parameters, while achieving significantly faster inference and substantially reduced deployment overhead.
📝 Abstract
Neural codec language models have achieved state-of-the-art performance in text-to-speech (TTS) synthesis, leveraging scalable architectures like autoregressive transformers and large-scale speech datasets. By framing voice cloning as a prompt continuation task, these models excel at cloning voices from short audio samples. However, this approach is limited in its ability to handle numerous or lengthy speech excerpts, since the concatenation of source and target speech must fit within the maximum context length, which is fixed at training time. In this work, we introduce Lina-Speech, a model that replaces traditional self-attention mechanisms with emerging recurrent architectures like Gated Linear Attention (GLA). Building on the success of initial-state tuning on RWKV, we extend this technique to voice cloning, enabling the use of multiple speech samples and full utilization of the context window in synthesis. This approach is fast, easy to deploy, and achieves performance comparable to fine-tuned baselines when the dataset size ranges from 3 to 15 minutes. Notably, Lina-Speech matches or outperforms state-of-the-art baseline models, including some with a parameter count up to four times higher or trained end-to-end. We release our code and checkpoints. Audio samples are available at https://theodorblackbird.github.io/blog/demo_lina/.
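To make the two core ideas concrete, here is a minimal numpy sketch of the GLA recurrence and of how initial-state tuning slots into it. This is an illustration of the general mechanism, not the paper's implementation: in practice the decay gates are produced by learned projections of the input, the state is per-head, and the per-speaker initial state `S0` is optimized by gradient descent on the cloning data rather than supplied by hand. All function and variable names below are illustrative.

```python
import numpy as np

def gla_step(S, q, k, v, alpha):
    """One step of a (simplified) Gated Linear Attention recurrence.

    S     : (d_k, d_v) running state matrix
    q, k  : (d_k,) query / key vectors
    v     : (d_v,) value vector
    alpha : (d_k,) data-dependent decay gate in (0, 1)
    """
    # Decay the old state per key dimension, then accumulate the
    # rank-1 update k v^T -- this replaces the softmax attention matrix.
    S = alpha[:, None] * S + np.outer(k, v)
    # Read the state out with the query.
    o = q @ S
    return S, o

def gla_sequence(Q, K, V, A, S0=None):
    """Process a length-T sequence in O(T) time with O(1) state,
    versus O(T^2) for softmax self-attention.

    S0 is the initial state. In initial-state tuning it is a small
    learnable per-speaker parameter (instead of zeros), which is how
    speaker identity can be injected without consuming context length.
    """
    T, d_k = Q.shape
    d_v = V.shape[1]
    S = np.zeros((d_k, d_v)) if S0 is None else S0.copy()
    out = np.empty((T, d_v))
    for t in range(T):
        S, out[t] = gla_step(S, Q[t], K[t], V[t], A[t])
    return out, S
```

Because the speaker is carried in `S0` rather than in a concatenated audio prompt, arbitrarily many (or arbitrarily long) reference samples can be distilled into the initial state offline, leaving the full context window free for the text to be synthesized.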