ParaStyleTTS: Toward Efficient and Robust Paralinguistic Style Control for Expressive Text-to-Speech Generation

📅 2025-10-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current text-to-speech (TTS) systems face a fundamental trade-off in paralinguistic style control (e.g., emotion, gender, age): reference-audio-based methods suffer from privacy and accessibility limitations, while large language model (LLM)-driven approaches incur high computational overhead, are sensitive to prompt phrasing, and lack interpretability, hindering real-time or resource-constrained deployment. Method: the authors propose a two-level style-adaptive architecture that, for the first time, decouples prosody modeling from paralinguistic style modeling. A lightweight neural network enables fine-grained, robust style control from textual prompts alone, eliminating reliance on both LLMs and reference audio. Contribution/Results: the method achieves speech quality competitive with state-of-the-art LLM-based TTS systems while accelerating inference by 30x, reducing parameter count by 8x, and cutting CUDA memory consumption by 2.5x. These gains significantly improve practicality and edge-deployment feasibility.

📝 Abstract
Controlling speaking style in text-to-speech (TTS) systems has become a growing focus in both academia and industry. While many existing approaches rely on reference audio to guide style generation, such methods are often impractical due to privacy concerns and limited accessibility. More recently, large language models (LLMs) have been used to control speaking style through natural language prompts; however, their high computational cost, lack of interpretability, and sensitivity to prompt phrasing limit their applicability in real-time and resource-constrained environments. In this work, we propose ParaStyleTTS, a lightweight and interpretable TTS framework that enables expressive style control from text prompts alone. ParaStyleTTS features a novel two-level style adaptation architecture that separates prosodic and paralinguistic speech style modeling. It allows fine-grained and robust control over factors such as emotion, gender, and age. Unlike LLM-based methods, ParaStyleTTS maintains consistent style realization across varied prompt formulations and is well-suited for real-world applications, including on-device and low-resource deployment. Experimental results show that ParaStyleTTS generates high-quality speech with performance comparable to state-of-the-art LLM-based systems while being 30x faster, using 8x fewer parameters, and requiring 2.5x less CUDA memory. Moreover, ParaStyleTTS exhibits superior robustness and controllability over paralinguistic speaking styles, providing a practical and efficient solution for style-controllable text-to-speech generation. Demo can be found at https://parastyletts.github.io/ParaStyleTTS_Demo/. Code can be found at https://github.com/haoweilou/ParaStyleTTS.
Problem

Research questions and friction points this paper is trying to address.

Efficient paralinguistic style control for expressive TTS
Robust text-based style modeling without reference audio
Lightweight interpretable framework for real-time style adaptation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Lightweight TTS framework using text prompts
Two-level architecture separates prosodic and paralinguistic styles
Efficient design enables real-time and on-device deployment
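The two-level idea above — a global paralinguistic style embedding derived from the text prompt, which then conditions a separate prosody predictor — can be illustrated with a toy sketch. This is not the paper's implementation; the attribute vocabulary, embedding tables, and the single linear prosody head below are all hypothetical stand-ins chosen only to show the separation of the two levels.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical paralinguistic attributes parsed from a text prompt
# (stand-ins for the factors the paper controls: emotion, gender, age).
STYLE_ATTRS = {
    "emotion": ["neutral", "happy", "sad"],
    "gender": ["female", "male"],
    "age": ["child", "adult", "senior"],
}
EMB_DIM = 8

# Level 1: lookup tables mapping each attribute value to a style embedding.
style_tables = {
    attr: {val: rng.standard_normal(EMB_DIM) for val in vals}
    for attr, vals in STYLE_ATTRS.items()
}

def paralinguistic_embedding(prompt_attrs):
    """Combine the embeddings of the attributes found in the prompt."""
    vecs = [style_tables[attr][val] for attr, val in prompt_attrs.items()]
    return np.sum(vecs, axis=0)

# Level 2: a toy prosody predictor that conditions per-phoneme features
# on the global style embedding (one linear projection here).
W = rng.standard_normal((EMB_DIM, 3))  # 3 prosody targets: pitch, energy, duration

def predict_prosody(phoneme_feats, style_vec):
    """phoneme_feats: (T, EMB_DIM) -> prosody targets (T, 3)."""
    conditioned = phoneme_feats + style_vec  # broadcast style to every phoneme
    return conditioned @ W

phonemes = rng.standard_normal((5, EMB_DIM))
style = paralinguistic_embedding(
    {"emotion": "happy", "gender": "female", "age": "adult"}
)
prosody = predict_prosody(phonemes, style)
print(prosody.shape)  # (5, 3)
```

Because the style embedding is a deterministic function of discrete prompt attributes rather than free-form LLM output, the same attributes always yield the same conditioning vector, which is one plausible reading of the robustness-to-prompt-phrasing claim.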