Generalized Multilingual Text-to-Speech Generation with Language-Aware Style Adaptation

📅 2025-04-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
Multilingual text-to-speech (TTS) faces two key challenges: cross-lingual phoneme discrepancies hinder acoustic modeling, and unified models struggle to capture language-specific prosody and speaking styles. To address these, we propose LanStyleTTS—the first non-autoregressive, unified multilingual TTS framework enabling fine-grained, phoneme-level language-aware style adaptation. Its core contributions are: (1) a standardized cross-lingual phoneme representation; (2) a language-conditioned, phoneme-granular style encoder for language-specific prosodic modeling; and (3) an autoencoder-driven latent acoustic representation replacing mel-spectrograms, balancing synthesis quality, inference efficiency, and model compactness. Evaluated on multiple state-of-the-art non-autoregressive backbones, LanStyleTTS achieves significant improvements in naturalness and phoneme accuracy while substantially reducing model size and computational cost—without compromising speech quality.

📝 Abstract
Text-to-Speech (TTS) models can generate natural, human-like speech across multiple languages by transforming phonemes into waveforms. However, multilingual TTS remains challenging due to discrepancies in phoneme vocabularies and variations in prosody and speaking style across languages. Existing approaches either train separate models for each language, which achieve high performance at the cost of increased computational resources, or use a unified model for multiple languages that struggles to capture fine-grained, language-specific style variations. In this work, we propose LanStyleTTS, a non-autoregressive, language-aware style adaptive TTS framework that standardizes phoneme representations and enables fine-grained, phoneme-level style control across languages. This design supports a unified multilingual TTS model capable of producing accurate and high-quality speech without the need to train language-specific models. We evaluate LanStyleTTS by integrating it with several state-of-the-art non-autoregressive TTS architectures. Results show consistent performance improvements across different model backbones. Furthermore, we investigate a range of acoustic feature representations, including mel-spectrograms and autoencoder-derived latent features. Our experiments demonstrate that latent encodings can significantly reduce model size and computational cost while preserving high-quality speech generation.
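The abstract's claim that autoencoder-derived latent features shrink the acoustic model can be illustrated with a minimal, hypothetical numpy sketch. This is not the authors' implementation: the dimensions (80-dim mel frames, 16-dim latents) and the untrained linear encoder/decoder are assumptions chosen only to show the dimensionality reduction.

```python
import numpy as np

rng = np.random.default_rng(0)

MEL_DIM, LATENT_DIM, FRAMES = 80, 16, 100  # illustrative sizes, not from the paper

# Random linear maps standing in for a trained autoencoder's encoder/decoder
W_enc = rng.normal(scale=0.1, size=(MEL_DIM, LATENT_DIM))
W_dec = rng.normal(scale=0.1, size=(LATENT_DIM, MEL_DIM))

mel = rng.normal(size=(FRAMES, MEL_DIM))  # mel-spectrogram frames
latent = mel @ W_enc                      # compressed acoustic representation
recon = latent @ W_dec                    # decoder maps latents back to mel space

# A TTS acoustic model predicting `latent` instead of `mel` outputs
# MEL_DIM / LATENT_DIM = 5x fewer values per frame.
assert latent.shape == (FRAMES, LATENT_DIM)
assert recon.shape == (FRAMES, MEL_DIM)
```

In the paper's pipeline, a vocoder-side decoder would recover the waveform from these latents; here the random weights only demonstrate the shape bookkeeping.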
Problem

Research questions and friction points this paper is trying to address.

Multilingual TTS struggles with phoneme and style variations across languages.
Existing methods trade performance for resources or lack language-specific style control.
Proposing a unified model with standardized phonemes and fine-grained style adaptation.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Standardizes phoneme representations across languages
Enables phoneme-level style control in TTS
Uses latent encodings to reduce computational cost
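The first two innovations above can be sketched as a toy lookup scheme: a shared phoneme vocabulary provides base embeddings, and a language-conditioned table adds a per-phoneme style offset. All names, the tiny vocabulary, and the random tables are hypothetical stand-ins, not the paper's actual encoder.

```python
import numpy as np

rng = np.random.default_rng(0)

# Standardized (shared) phoneme vocabulary across languages -- toy IDs
PHONEMES = {"a": 0, "n": 1, "i": 2, "t": 3}
LANGUAGES = {"en": 0, "zh": 1}
EMB_DIM = 8

phoneme_table = rng.normal(size=(len(PHONEMES), EMB_DIM))
# One table per language: language-conditioned, phoneme-granular style offsets
style_tables = rng.normal(scale=0.1, size=(len(LANGUAGES), len(PHONEMES), EMB_DIM))

def encode(phoneme_seq, lang):
    """Return language-aware phoneme embeddings for one utterance."""
    ids = np.array([PHONEMES[p] for p in phoneme_seq])
    # Shared base embedding plus language-specific, per-phoneme style offset
    return phoneme_table[ids] + style_tables[LANGUAGES[lang], ids]

en = encode(["a", "n", "i"], "en")
zh = encode(["a", "n", "i"], "zh")
assert en.shape == (3, EMB_DIM)
# Same phoneme sequence, different language -> different style-adapted embeddings
assert not np.allclose(en, zh)
```

The point of the sketch is the factorization: one shared phoneme space (unified model) plus small per-language style parameters (fine-grained adaptation), rather than one full model per language.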
Haowei Lou
UNSW CSE
Brain Computer Interface, Generative Artificial Intelligence, Speech Generation
Hye-young Paik
School of Computer Science and Engineering, UNSW Sydney, Kensington 2033, Australia
Sheng Li
School of Engineering, Institute of Science Tokyo, Yokohama, Japan
Wen Hu
School of Computer Science and Engineering, UNSW Sydney, Kensington 2033, Australia
Lina Yao
Science Lead at CSIRO Data61 & Professor at University of New South Wales, Australia
Machine Learning, Reinforcement Learning, Recommender Systems, LLM Agent, Brain Computer Interface