🤖 AI Summary
To address speaker dependency and poor cross-corpus generalizability in dysarthric speech severity assessment, this paper proposes DSSCNet, a deep neural network, together with a cross-corpus transfer learning framework. DSSCNet integrates convolutional feature extraction, Squeeze-and-Excitation (SE) channel-wise attention, and residual connections to model fine-grained severity directly from mel-spectrograms. To enhance robustness to unseen speakers, the authors further introduce a detection-task-oriented transfer fine-tuning strategy. Evaluated on the TORGO and UA-Speech corpora under the One-Speaker-Per-Severity (OSPS) and Leave-One-Speaker-Out (LOSO) protocols, the method achieves state-of-the-art accuracies of up to 75.80% on TORGO (OSPS) and 79.44% on UA-Speech (LOSO). This work is the first to jointly leverage SE attention and detection-based transfer learning for speaker-independent dysarthria severity classification, offering a clinically viable and generalizable solution for objective assessment.
📝 Abstract
Dysarthric speech severity classification is crucial for objective clinical assessment and progress monitoring in individuals with motor speech disorders. Although prior methods have addressed this task, achieving robust generalization in speaker-independent (SID) scenarios remains challenging. This work introduces DSSCNet, a novel deep neural architecture that combines convolutional, Squeeze-and-Excitation (SE), and residual network components to extract discriminative representations of dysarthric speech from mel-spectrograms. The SE block selectively emphasizes the most informative features of the dysarthric speech, thereby reducing loss and enhancing overall model performance. We also propose a cross-corpus fine-tuning framework for severity classification, adapted from detection-based transfer learning approaches. DSSCNet is evaluated on two benchmark dysarthric speech corpora, TORGO and UA-Speech, under two speaker-independent evaluation protocols: One-Speaker-Per-Severity (OSPS) and Leave-One-Speaker-Out (LOSO). Without fine-tuning, DSSCNet achieves accuracies of 56.84% and 62.62% under OSPS and 63.47% and 64.18% under LOSO on TORGO and UA-Speech, respectively, outperforming existing state-of-the-art methods. Upon fine-tuning, performance improves substantially, with DSSCNet achieving up to 75.80% accuracy on TORGO and 68.25% on UA-Speech under OSPS, and up to 77.76% and 79.44%, respectively, under LOSO. These results demonstrate the effectiveness and generalizability of DSSCNet for fine-grained severity classification across diverse dysarthric speech datasets.
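The SE recalibration with a residual connection that the abstract describes can be sketched in a few lines. This is a minimal NumPy illustration of the generic squeeze-excitation mechanism, not the paper's implementation: the channel count, reduction ratio `r`, and weight names are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def se_block(x, w1, w2):
    """Squeeze-and-Excitation channel attention (illustrative sketch).

    x  : feature map of shape (C, H, W), e.g. from a conv layer over a mel-spectrogram
    w1 : (C // r, C) squeeze (bottleneck) projection weights
    w2 : (C, C // r) excitation projection weights
    """
    # Squeeze: global average pooling over the spatial dimensions -> (C,)
    z = x.mean(axis=(1, 2))
    # Excitation: bottleneck MLP (ReLU then sigmoid) yields per-channel gates in (0, 1)
    s = sigmoid(w2 @ np.maximum(w1 @ z, 0.0))
    # Scale: reweight each channel of the input feature map
    return x * s[:, None, None]

def se_residual_block(x, w1, w2):
    """Residual connection around the SE-recalibrated features."""
    return x + se_block(x, w1, w2)

# Toy shapes for demonstration (hypothetical, not from the paper)
rng = np.random.default_rng(0)
C, H, W, r = 8, 4, 4, 2
x = rng.standard_normal((C, H, W))
w1 = rng.standard_normal((C // r, C)) * 0.1
w2 = rng.standard_normal((C, C // r)) * 0.1
y = se_residual_block(x, w1, w2)
print(y.shape)  # (8, 4, 4)
```

Because the gates are sigmoid outputs, each channel of `y` equals `x * (1 + s_c)` with `s_c` in (0, 1), so the residual path preserves the input while the SE path amplifies the channels the gating deems informative.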