🤖 AI Summary
To address poor text-to-speech (TTS) quality in low-resource languages, exemplified by Thai, where data scarcity and phonological complexity (especially tonal variation) are the main obstacles, this paper proposes a high-fidelity TTS framework based on joint phoneme-tone modeling. Methodologically, it introduces three key innovations: (1) a decoupled encoding mechanism for phonemes and tones; (2) a data-efficient training paradigm integrating knowledge distillation, data augmentation, and a lightweight adaptive acoustic model; and (3) end-to-end prosody modeling enabling zero-shot voice cloning and cross-domain generalization. Evaluated on Thai, the system achieves a subjective mean opinion score (MOS) above 4.2 and outperforms mainstream baselines across all objective metrics. The framework has been deployed in diverse real-world applications, including finance, healthcare, education, and legal domains, demonstrating scalability and robustness. This work establishes a practical, extensible technical pathway for high-quality TTS in low-resource tonal languages.
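The decoupled phoneme-tone encoding in point (1) can be pictured as two separate embedding tables whose outputs are combined per syllable, so the scarce tonal data is shared across all phonemes rather than requiring every (phoneme, tone) pair to be observed. The sizes and the additive combination below are illustrative assumptions, not the paper's published architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes for illustration; Thai has five lexical tones.
N_PHONEMES, N_TONES, DIM = 64, 5, 8

# Decoupled lookup tables: phoneme identity and tone are embedded
# independently, then combined, so tone knowledge transfers across phonemes.
phoneme_table = rng.normal(size=(N_PHONEMES, DIM))
tone_table = rng.normal(size=(N_TONES, DIM))

def encode(phoneme_ids, tone_ids):
    """Encode a syllable sequence as the sum of phoneme and tone embeddings."""
    return phoneme_table[phoneme_ids] + tone_table[tone_ids]

# Example: a 3-syllable utterance, each syllable = (phoneme id, tone id).
frames = encode(np.array([3, 17, 42]), np.array([0, 2, 4]))
print(frames.shape)  # (3, 8)
```

Because the tables are separate, changing only a syllable's tone shifts its representation by a tone-embedding delta that is identical for every phoneme, which is the data-sharing property that makes this formulation attractive under scarcity.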
📝 Abstract
Text-to-speech (TTS) technology has achieved impressive results for widely spoken languages, yet many under-resourced languages still struggle with limited data and linguistic complexity. In this paper, we present a methodology that integrates a data-optimized framework with an advanced acoustic model to build high-quality TTS systems for low-resource scenarios. We demonstrate the effectiveness of our approach on Thai, where intricate phonetic rules and sparse resources are effectively addressed. Our method enables zero-shot voice cloning and improved performance across diverse client applications spanning finance, healthcare, education, and law. Extensive subjective and objective evaluations confirm that our model meets state-of-the-art standards, offering a scalable solution for TTS production in data-limited settings, with significant implications for broader industry adoption and multilingual accessibility.