DyPCL: Dynamic Phoneme-level Contrastive Learning for Dysarthric Speech Recognition

📅 2025-01-31
🤖 AI Summary
Speech recognition for dysarthric speech suffers from poor cross-speaker robustness and significant performance degradation due to speaker-specific severity variations and distributional shift relative to neurotypical speech. Method: We propose a dynamic phoneme-level contrastive learning framework comprising: (1) phoneme-granularity dynamic segmentation via CTC alignment; (2) a progressive curriculum that selects increasingly hard negatives based on phoneme similarity; and (3) joint optimization of a phoneme-level contrastive loss with end-to-end ASR. Contribution/Results: This work introduces, for the first time, a dynamic phoneme-level contrastive mechanism and a difficulty-adaptive curriculum learning paradigm. On the UASpeech corpus, the method achieves a 22.10% relative reduction in average word error rate across dysarthric speakers, markedly mitigating inter-speaker variability and enhancing model generalization.
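Step (1) above, carving an utterance into phoneme spans from a CTC alignment, can be sketched as follows. This is an illustrative reconstruction, not the paper's code: the greedy per-frame label format, the blank symbol, and the function name are all assumptions.

```python
# Hedged sketch: derive phoneme segments from a frame-level CTC best path.
# Consecutive repeats of a label form one segment; blank frames separate
# segments (standard CTC collapse, applied here to keep frame boundaries).

def segments_from_ctc_alignment(frame_labels, blank="<b>"):
    """Collapse a per-frame CTC label sequence into (phoneme, start, end) spans."""
    segments = []
    prev, start = None, None
    for t, lab in enumerate(frame_labels):
        if lab != prev:
            if prev is not None and prev != blank:
                segments.append((prev, start, t))  # close the previous span
            prev, start = lab, t
    if prev is not None and prev != blank:
        segments.append((prev, start, len(frame_labels)))  # close final span
    return segments
```

Each returned span gives the frame range over which a phoneme's frame embeddings could be pooled into a single phoneme-level representation for the contrastive loss.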

📝 Abstract
Dysarthric speech recognition often suffers from performance degradation due to the intrinsic diversity of dysarthric severity and the extrinsic disparity from normal speech. To bridge these gaps, we propose a Dynamic Phoneme-level Contrastive Learning (DyPCL) method, which yields speaker-invariant representations across diverse speakers. We decompose each speech utterance into phoneme segments for phoneme-level contrastive learning, leveraging dynamic connectionist temporal classification (CTC) alignment. Unlike prior studies that focus on utterance-level embeddings, this granular learning allows discrimination of subtle parts of speech. In addition, we introduce dynamic curriculum learning, which progressively transitions from easy negative samples to hard-to-distinguish ones based on the phonetic similarity of phonemes. Training by difficulty level alleviates the inherent variability across speakers and better identifies challenging speech. Evaluated on the UASpeech dataset, DyPCL outperforms baseline models, achieving an average 22.10% relative reduction in word error rate (WER) across the overall dysarthria group.
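The abstract's two core ideas, a contrastive loss over phoneme embeddings and a negative-sampling curriculum that moves from phonetically dissimilar (easy) to similar (hard) negatives, can be sketched minimally in NumPy. All names, the InfoNCE-style formulation, the similarity function, and the staging scheme are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def phoneme_contrastive_loss(anchor, positive, negatives, temperature=0.1):
    """InfoNCE-style loss for one phoneme embedding, using cosine similarity."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    logits = np.array([cos(anchor, positive)] + [cos(anchor, n) for n in negatives])
    logits /= temperature
    logits -= logits.max()                       # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])                     # positive sits at index 0

def curriculum_negatives(anchor_phone, pool, similarity, stage, n_stages=3):
    """Pick negative phonemes by phonetic similarity to the anchor:
    early stages use dissimilar (easy) phonemes, later stages the most
    similar (hard) ones."""
    ranked = sorted(pool, key=lambda p: similarity(anchor_phone, p))
    k = len(ranked) // n_stages or 1
    lo = min(stage * k, len(ranked) - k)
    return ranked[lo:lo + k]
```

In training, the curriculum stage would advance on a schedule (e.g. per epoch), and the contrastive loss would be added to the end-to-end ASR objective as the summary's point (3) describes.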
Problem

Research questions and friction points this paper is trying to address.

Dysarthric Speech Recognition
Individual Variability
Speech Impairment
Innovation

Methods, ideas, or system contributions that make the work stand out.

DyPCL
phoneme-level contrastive learning
speech recognition for disordered speech
Wonjun Lee
Department of Computer Science and Engineering, POSTECH, Republic of Korea
Solee Im
POSTECH
AI, NLP, Speech Recognition
Heejin Do
Postdoctoral Fellow, ETH Zurich, ETH AI Center
NLP, AI in Education, Evaluation, Human-AI Interaction, Interpretability
Yunsu Kim
aiXplain, Inc.
Natural Language Processing, Machine Translation, Machine Learning
Jungseul Ok
Associate Professor, CSE/AI, POSTECH
Reinforcement Learning, Machine Learning
Gary Geunbae Lee
Department of Computer Science and Engineering, POSTECH, Republic of Korea; Graduate School of Artificial Intelligence, POSTECH, Republic of Korea