Lina-Speech: Gated Linear Attention is a Fast and Parameter-Efficient Learner for text-to-speech synthesis

📅 2024-10-30
🏛️ arXiv.org
📈 Citations: 4 · Influential: 1
🤖 AI Summary
Existing Transformer-based voice cloning models suffer from limited context length, weak prosody modeling, and low inference efficiency. To address these issues, this paper proposes an efficient TTS framework based on Gated Linear Attention (GLA). The method is the first to extend initial-state tuning to voice cloning, enabling multi-segment long-speech input and full utilization of the context window. By replacing self-attention with GLA, it breaks the quadratic computational complexity bottleneck, supporting effective long-sequence modeling and high-throughput inference. Experiments demonstrate that, using only 3–15 minutes of target-speaker data, the approach matches the performance of full fine-tuning baselines and equals or surpasses state-of-the-art models with four times as many parameters, while achieving significantly faster inference and substantially reduced deployment overhead.

📝 Abstract
Neural codec language models have achieved state-of-the-art performance in text-to-speech (TTS) synthesis, leveraging scalable architectures like autoregressive transformers and large-scale speech datasets. By framing voice cloning as a prompt continuation task, these models excel at cloning voices from short audio samples. However, this approach is limited in its ability to handle numerous or lengthy speech excerpts, since the concatenation of source and target speech must fall within the maximum context length, which is determined during training. In this work, we introduce Lina-Speech, a model that replaces traditional self-attention mechanisms with emerging recurrent architectures like Gated Linear Attention (GLA). Building on the success of initial-state tuning on RWKV, we extend this technique to voice cloning, enabling the use of multiple speech samples and full utilization of the context window in synthesis. This approach is fast, easy to deploy, and achieves performance comparable to fine-tuned baselines when the dataset size ranges from 3 to 15 minutes. Notably, Lina-Speech matches or outperforms state-of-the-art baseline models, including some with a parameter count up to four times higher or trained in an end-to-end style. We release our code and checkpoints. Audio samples are available at https://theodorblackbird.github.io/blog/demo_lina/.
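To make the core idea concrete: GLA replaces the quadratic attention matrix with a fixed-size recurrent state that is decayed by a data-dependent gate at each step, giving cost linear in sequence length. Below is a minimal NumPy sketch of that recurrence in its sequential form; all names and shapes are illustrative assumptions, not the paper's implementation (real GLA uses a chunked, hardware-efficient parallel form).

```python
import numpy as np

def gla_recurrence(q, k, v, alpha, S0=None):
    """Sequential form of a gated linear-attention layer (illustrative sketch).

    q, k, v: (T, d) query/key/value sequences.
    alpha:   (T, d) per-step decay gates in (0, 1).
    S0:      optional (d, d) initial state -- the object that
             initial-state tuning optimizes per speaker.
    Cost is O(T * d^2): linear in sequence length T, with no T x T matrix.
    """
    T, d = q.shape
    S = np.zeros((d, d)) if S0 is None else S0.copy()
    out = np.empty_like(v)
    for t in range(T):
        # Decay the running state, then add the new key/value outer product.
        S = alpha[t][:, None] * S + np.outer(k[t], v[t])
        out[t] = q[t] @ S  # read out with the query
    return out, S
```

With the gate fixed at 1, this reduces to plain causal linear attention, `out[t] = sum_{s<=t} (q_t . k_s) v_s`; the learned gate is what lets the model forget selectively over long contexts.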
Problem

Research questions and friction points this paper is trying to address.

Overcomes limited context length in voice cloning for diverse prosody and style
Addresses quadratic complexity of self-attention to improve inference throughput
Enables multi-sample conditioning for fine-grained emotion and prosody control
Innovation

Methods, ideas, or system contributions that make the work stand out.

Gated Linear Attention replaces self-attention for efficiency
Initial-State Tuning enables multi-sample conditioning
Recurrent architecture handles arbitrary length speech samples
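The bullets above combine naturally: because the recurrent layer carries a fixed-size state, voice cloning can be cast as tuning only that initial state per speaker while every weight stays frozen. The hypothetical NumPy sketch below (speaker names, gate values, and sizes are all made up) shows just the conditioning effect; actual initial-state tuning would fit the state to 3–15 minutes of target-speaker audio by gradient descent.

```python
import numpy as np

def run_frozen_gla(q, k, v, alpha, S0):
    """One frozen gated linear-attention layer; only S0 varies per speaker."""
    S = S0.copy()
    out = np.empty_like(v)
    for t in range(len(q)):
        S = alpha[t][:, None] * S + np.outer(k[t], v[t])
        out[t] = q[t] @ S
    return out

rng = np.random.default_rng(0)
T, d = 8, 16
q, k, v = (rng.normal(size=(T, d)) for _ in range(3))
alpha = np.full((T, d), 0.9)  # fixed decay gate, for illustration

# Two hypothetical speakers = two tuned initial states, same frozen weights.
speaker_a = 0.1 * rng.normal(size=(d, d))
speaker_b = 0.1 * rng.normal(size=(d, d))
out_a = run_frozen_gla(q, k, v, alpha, speaker_a)
out_b = run_frozen_gla(q, k, v, alpha, speaker_b)
```

The trainable footprint per speaker is one `(d, d)` state per layer, a small fraction of the model, which is why deployment overhead stays low compared to full fine-tuning.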