🤖 AI Summary
To address the poor fluency and reduced emotional expressivity of dysarthric speech in post-stroke patients, this paper proposes a wearable intelligent larynx system. The system employs ultra-sensitive fabric strain sensors to simultaneously acquire laryngeal muscle vibrations and carotid pulse signals, integrated with a large language model (LLM) to enable token-level streaming speech decoding, context-aware real-time error correction, and sentence-level enhancement of emotion and logic. Its core innovation lies in a first-of-its-kind "physiological signal + LLM agent" fusion paradigm, overcoming longstanding limitations of conventional silent-speech interfaces in naturalness, fluency, and clinical deployability. Evaluated on five patients, the system achieves a word error rate of 4.2% and a sentence error rate of 2.9%, while user satisfaction improves by 55%. These results demonstrate the efficacy and clinical potential of this portable, highly adaptive neurorehabilitation communication platform.
📝 Abstract
Wearable silent speech systems hold significant potential for restoring communication in patients with speech impairments. However, seamless, coherent speech remains elusive, and clinical efficacy is still unproven. Here, we present an AI-driven intelligent throat (IT) system that integrates throat muscle vibration and carotid pulse signal sensors with large language model (LLM) processing to enable fluent, emotionally expressive communication. The system utilizes ultrasensitive textile strain sensors to capture high-quality signals from the neck area and supports token-level processing for real-time, continuous speech decoding, enabling seamless, delay-free communication. In tests with five stroke patients with dysarthria, IT's LLM agents intelligently corrected token errors and enriched sentence-level emotional and logical coherence, achieving low error rates (4.2% word error rate, 2.9% sentence error rate) and a 55% increase in user satisfaction. This work establishes a portable, intuitive communication platform for patients with dysarthria, with the potential to be applied broadly across different neurological conditions and in multi-language support systems.
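To illustrate the token-level streaming pipeline the abstract describes, here is a minimal sketch in Python. The paper does not publish its implementation; the `correct_token` substitution table below is a hypothetical stand-in for the LLM agent's context-aware error correction, and the token stream stands in for the decoded sensor signal.

```python
from typing import Iterator, List

# Hypothetical stand-in for the LLM correction agent: a small
# substitution table replaces the actual model call.
CORRECTIONS = {"wader": "water", "pleese": "please"}

def correct_token(token: str) -> str:
    """Correct a single decoded token (stand-in for the LLM agent)."""
    return CORRECTIONS.get(token, token)

def stream_decode(tokens: Iterator[str]) -> List[str]:
    """Token-level streaming decode: each token is corrected and
    emitted as it arrives, rather than after the full sentence,
    which is what enables low-latency communication."""
    decoded = []
    for tok in tokens:
        decoded.append(correct_token(tok))
    return decoded

print(stream_decode(iter(["i", "want", "wader", "pleese"])))
```

The per-token loop is the key design point: correction happens inside the stream, so the user hears output with per-token latency instead of per-sentence latency; in the real system a sentence-level pass would additionally restore emotional and logical coherence.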