SpeechWeave: Diverse Multilingual Synthetic Text & Audio Data Generation Pipeline for Training Text to Speech Models

📅 2025-09-15
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
High-quality text-to-speech (TTS) training is hindered by narrow domain coverage, licensing constraints, and the insufficient scale of authentic speech data; meanwhile, large language model (LLM)-generated text suffers from low lexical diversity, existing text normalization tools lack robustness, and human recording does not scale. To address these challenges, the authors propose SpeechWeave, an end-to-end automated multilingual synthetic speech data generation framework. It integrates prompt-optimized LLM-based text generation, a high-accuracy configurable text normalization module, and standardized TTS synthesis. SpeechWeave enables customizable, cross-lingual and cross-domain speech corpus construction, improving phonemic and linguistic diversity by 10-48% over the baseline, achieving approximately 97% text normalization accuracy, and producing highly consistent, TTS-optimized synthetic speech. The framework alleviates the bottleneck that real-world data limitations impose on large-scale TTS model training.

๐Ÿ“ Abstract
High-quality Text-to-Speech (TTS) model training requires extensive and diverse text and speech data. Procuring such data from real sources is challenging due to domain specificity, licensing, and scalability issues. Large language models (LLMs) can certainly generate textual data, but they produce repetitive text when prompts lack sufficient variation during generation. Another important aspect of TTS training data is text normalization. Normalization tools may occasionally introduce anomalies or overlook valuable patterns, impacting data quality. Furthermore, relying on voice artists for large-scale speech recording is impractical for commercial TTS systems with standardized voices. To address these challenges, we propose SpeechWeave, a synthetic speech data generation pipeline that automates the generation of multilingual, domain-specific datasets for training TTS models. Our experiments reveal that the pipeline generates data that is 10-48% more diverse than the baseline across various linguistic and phonetic metrics, along with speaker-standardized speech audio, while producing approximately 97% correctly normalized text. Our approach enables scalable, high-quality data generation for TTS training, improving diversity, normalization, and voice consistency in the generated datasets.
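To make the text-normalization step concrete: for TTS, written forms such as numerals and currency amounts must be expanded into the words a speaker would actually say. The sketch below is a minimal rule-based illustration of this idea, not the paper's configurable normalization module; the word lists and regex rules are assumptions chosen for English.

```python
import re

# Illustrative rule-based text normalization for TTS (NOT SpeechWeave's
# module): expand cardinal numbers and simple currency amounts into
# their spoken English forms.

_ONES = ["zero", "one", "two", "three", "four", "five", "six", "seven",
         "eight", "nine", "ten", "eleven", "twelve", "thirteen", "fourteen",
         "fifteen", "sixteen", "seventeen", "eighteen", "nineteen"]
_TENS = ["", "", "twenty", "thirty", "forty", "fifty", "sixty",
         "seventy", "eighty", "ninety"]

def spell_number(n: int) -> str:
    """Spell out an integer in [0, 1000) as English words."""
    if n < 20:
        return _ONES[n]
    if n < 100:
        tens, rem = divmod(n, 10)
        return _TENS[tens] + ("-" + _ONES[rem] if rem else "")
    hundreds, rem = divmod(n, 100)
    return _ONES[hundreds] + " hundred" + (" " + spell_number(rem) if rem else "")

def normalize(text: str) -> str:
    """Expand $<amount> and bare integers into spoken words."""
    text = re.sub(r"\$(\d+)",
                  lambda m: spell_number(int(m.group(1))) + " dollars", text)
    text = re.sub(r"\b\d+\b",
                  lambda m: spell_number(int(m.group(0))), text)
    return text
```

For example, `normalize("The fee is $25 for 3 items")` yields `"The fee is twenty-five dollars for three items"`. A production normalizer would also cover dates, ordinals, abbreviations, and language-specific rules, which is where the robustness issues discussed above arise.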
Problem

Research questions and friction points this paper is trying to address.

Generating diverse multilingual text for TTS training
Automating text normalization to improve data quality
Producing scalable speaker-standardized synthetic speech audio
Innovation

Methods, ideas, or system contributions that make the work stand out.

Automated multilingual synthetic data generation pipeline
Diverse text and speaker-standardized audio synthesis
High normalization accuracy and phonetic diversity enhancement
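The diversity gains claimed above are measured with linguistic and phonetic metrics. As a hedged illustration of what lexical-diversity measurement can look like (the paper's exact metrics are not reproduced here), the sketch below computes two standard corpus-level scores: type-token ratio and distinct-n.

```python
# Illustrative lexical-diversity metrics, assumed for demonstration;
# not necessarily the metrics used in the paper's evaluation.

def type_token_ratio(sentences):
    """Unique tokens divided by total tokens across the corpus."""
    tokens = [t for s in sentences for t in s.lower().split()]
    return len(set(tokens)) / len(tokens) if tokens else 0.0

def distinct_n(sentences, n=2):
    """Fraction of unique n-grams among all n-grams in the corpus."""
    ngrams = []
    for s in sentences:
        toks = s.lower().split()
        ngrams.extend(tuple(toks[i:i + n]) for i in range(len(toks) - n + 1))
    return len(set(ngrams)) / len(ngrams) if ngrams else 0.0
```

A repetitive LLM-generated corpus (e.g. the same sentence three times) scores near the low end on both metrics, while a varied corpus scores near 1.0, which is the kind of gap the 10-48% improvement figures quantify.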