AI Summary
High-quality text-to-speech (TTS) training is hindered by narrow domain coverage, licensing constraints, and the limited scale of authentic speech data; meanwhile, text generated by large language models (LLMs) suffers from low lexical diversity, existing text normalization tools lack robustness, and human recording does not scale. To address these challenges, we propose SpeechWeave, the first end-to-end automated framework for multilingual synthetic speech data generation. It integrates prompt-optimized LLM-based text generation, a high-accuracy configurable text normalization module, and standardized TTS synthesis. SpeechWeave enables customizable, cross-lingual, cross-domain speech corpus construction, improving phonemic and linguistic diversity by 10-48%, achieving approximately 97% text normalization accuracy, and producing highly consistent, TTS-ready synthetic speech. The framework alleviates the bottleneck that real-world data limitations impose on large-scale TTS model training.
Abstract
High-quality Text-to-Speech (TTS) model training requires extensive and diverse text and speech data, which are difficult to procure from real sources due to domain specificity, licensing, and scalability constraints. Large language models (LLMs) can certainly generate textual data, but they tend to produce repetitive text when prompts lack sufficient variation. Text normalization is another important aspect of TTS training data: normalization tools can introduce anomalies or overlook valuable patterns, degrading data quality. Furthermore, relying on voice artists for large-scale speech recording is impractical for commercial TTS systems that require standardized voices. To address these challenges, we propose SpeechWeave, a synthetic speech data generation pipeline that automates the creation of multilingual, domain-specific datasets for training TTS models. Our experiments show that the pipeline generates data that is 10-48% more diverse than the baseline across various linguistic and phonetic metrics, produces speaker-standardized speech audio, and yields approximately 97% correctly normalized text. Our approach enables scalable, high-quality data generation for TTS training, improving diversity, normalization, and voice consistency in the generated datasets.
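The three stages the abstract describes (LLM-based text generation, text normalization, and standardized TTS synthesis) can be sketched as a minimal pipeline. Everything below is a hypothetical illustration, not SpeechWeave's actual interface: the function names (`generate_text`, `normalize_text`, `synthesize`), the toy currency rule, and the fixed speaker id are all assumptions made for the sketch.

```python
import re

# Spoken forms for the few numerals used in this toy example.
NUM2WORD = {"20": "twenty"}

def generate_text(domain: str, topics: list[str]) -> list[str]:
    """Stand-in for prompt-optimized LLM generation (one sentence per topic).
    A real pipeline would vary prompts per topic to raise lexical diversity."""
    return [f"The {domain} plan costs $20 for {t}." for t in topics]

def normalize_text(sentence: str) -> str:
    """Toy rule-based normalizer: expand currency into its spoken form,
    standing in for the configurable normalization module."""
    return re.sub(r"\$(\d+)",
                  lambda m: f"{NUM2WORD[m.group(1)]} dollars",
                  sentence)

def synthesize(sentence: str, speaker: str = "std-voice-1") -> dict:
    """Stand-in for TTS synthesis with a fixed, standardized speaker."""
    return {"speaker": speaker, "text": sentence}

def build_corpus(domain: str, topics: list[str]) -> list[dict]:
    """Chain the three stages: generate -> normalize -> synthesize."""
    return [synthesize(normalize_text(s))
            for s in generate_text(domain, topics)]

corpus = build_corpus("banking", ["savings", "loans"])
```

Keeping the speaker fixed at the synthesis stage mirrors the abstract's point about speaker-standardized audio: every utterance in the corpus shares one voice, regardless of the generated text.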