🤖 AI Summary
Dysarthric speech, characterized by high inter-speaker variability and slowed articulation, severely degrades automatic speech recognition (ASR) performance. To address this, we propose an unsupervised joint rhythm and voice transformation framework that maps dysarthric speech to temporally and spectrally normalized representations approximating healthy speech, thereby improving ASR robustness. Our key contributions are: (1) a syllable-level rhythm modeling mechanism tailored to the non-uniform segment durations and weakened rhythmic structure of dysarthric speech; and (2) an unpaired conversion pipeline built on an extended Rhythm and Voice (RnV) framework that combines syllable-level duration normalization with spectral mapping, evaluated by training LF-MMI acoustic models and fine-tuning Whisper on the converted speech. On the TORGO dataset, the LF-MMI model achieves substantial word error rate reductions, up to 32.7% for severely dysarthric samples, while Whisper fine-tuning yields only marginal gains, underscoring the critical role of explicit rhythm modeling.
📝 Abstract
Automatic speech recognition (ASR) systems struggle with dysarthric speech due to high inter-speaker variability and slow speaking rates. To address this, we explore dysarthric-to-healthy speech conversion for improved ASR performance. Our approach extends the Rhythm and Voice (RnV) conversion framework by introducing a syllable-based rhythm modeling method suited for dysarthric speech. We assess its impact on ASR by training LF-MMI models and fine-tuning Whisper on converted speech. Experiments on the Torgo corpus reveal that LF-MMI achieves significant word error rate reductions, especially for more severe cases of dysarthria, while fine-tuning Whisper on converted data has minimal effect on its performance. These results highlight the potential of unsupervised rhythm and voice conversion for dysarthric ASR. Code available at: https://github.com/idiap/RnV
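To make the syllable-level duration normalization idea concrete, here is a minimal sketch of one plausible building block: time-stretching each syllable segment of a waveform to a common target duration. The function name, the fixed-target-duration strategy, and the use of simple linear-interpolation resampling are all illustrative assumptions; the actual RnV framework uses learned rhythm and voice models, not naive resampling.

```python
import numpy as np

def normalize_syllable_durations(signal, boundaries, target_dur, sr=16000):
    """Hypothetical sketch: stretch/compress each syllable segment to a
    fixed target duration via linear-interpolation resampling.

    signal     : 1-D waveform array
    boundaries : list of (start, end) sample indices, one per syllable
    target_dur : target duration per syllable, in seconds
    sr         : sampling rate in Hz
    """
    n_target = int(round(target_dur * sr))
    out = []
    for start, end in boundaries:
        seg = signal[start:end]
        # Map n_target evenly spaced points onto the original segment
        # and interpolate: a crude, pitch-altering time-stretch.
        src_idx = np.linspace(0.0, len(seg) - 1, num=n_target)
        out.append(np.interp(src_idx, np.arange(len(seg)), seg))
    return np.concatenate(out)
```

In a real system the per-syllable targets would come from a rhythm model of healthy speech (and the stretching from a pitch-preserving vocoder or signal model) rather than a single fixed duration, but the sketch shows why syllable boundaries matter: slowed dysarthric syllables are compressed individually instead of uniformly rescaling the whole utterance.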