🤖 AI Summary
This work proposes MyoText, a framework for efficient, keyboard-free text entry from surface electromyography (sEMG) signals, tailored to wearable and mixed-reality applications. The approach employs a hierarchical decoding strategy: first, a CNN-BiLSTM-Attention model maps multi-channel sEMG signals to a physiologically inspired finger-activation representation; second, ergonomic typing priors guide character inference; and third, a fine-tuned T5 language model reconstructs complete sentences. By introducing finger movements as an interpretable intermediate representation, MyoText bridges neuromuscular signals and language modeling in a modular, explainable input pipeline. Evaluated on 30 users, the system achieves 85.4% finger-classification accuracy, a 5.4% character error rate, and a 6.5% word error rate, outperforming existing methods.
📝 Abstract
Surface electromyography (sEMG) provides a direct neural interface for decoding muscle activity and offers a promising foundation for keyboard-free text input in wearable and mixed-reality systems. Previous sEMG-to-text studies have mainly focused on recognizing letters directly from the raw signal, an important first step toward translating muscle activity into text. Building on this foundation, we present MyoText, a hierarchical framework that decodes sEMG signals to text through physiologically grounded intermediate stages: it first classifies finger activations from multichannel sEMG with a CNN-BiLSTM-Attention model, then applies ergonomic typing priors to infer letters, and finally reconstructs full sentences with a fine-tuned T5 transformer. This modular design mirrors the natural hierarchy of typing, linking muscle intent to language output and reducing the decoding search space. Evaluated on 30 users from the emg2qwerty dataset, MyoText outperforms baseline approaches, achieving 85.4% finger-classification accuracy, a 5.4% character error rate (CER), and a 6.5% word error rate (WER). Beyond accuracy gains, this methodology establishes a principled pathway from neuromuscular signals to text, providing a blueprint for virtual- and augmented-reality typing interfaces that operate entirely without physical keyboards. By integrating ergonomic structure with transformer-based linguistic reasoning, MyoText advances the feasibility of seamless, wearable neural input for future ubiquitous computing environments.
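The ergonomic-prior stage described above can be sketched in plain Python. The finger-to-key table below follows the conventional QWERTY touch-typing assignment, and the letter-frequency tie-breaker is a toy stand-in for the T5 language-model stage; neither detail is taken from the paper itself.

```python
import string

# Standard QWERTY touch-typing assignment of letters to fingers
# (the ergonomic prior that narrows the character search space).
FINGER_KEYS = {
    "left_pinky":   set("qaz"),
    "left_ring":    set("wsx"),
    "left_middle":  set("edc"),
    "left_index":   set("rfvtgb"),
    "right_index":  set("yhnujm"),
    "right_middle": set("ik"),
    "right_ring":   set("ol"),
    "right_pinky":  set("p"),
}

# Approximate English letter frequencies (%), a toy stand-in for the
# language model that resolves which of a finger's keys was struck.
LETTER_FREQ = {
    "e": 12.7, "t": 9.1, "a": 8.2, "o": 7.5, "i": 7.0, "n": 6.7,
    "s": 6.3, "h": 6.1, "r": 6.0, "d": 4.3, "l": 4.0, "c": 2.8,
    "u": 2.8, "m": 2.4, "w": 2.4, "f": 2.2, "g": 2.0, "y": 2.0,
    "p": 1.9, "b": 1.5, "v": 1.0, "k": 0.8, "j": 0.15, "x": 0.15,
    "q": 0.10, "z": 0.07,
}

def candidate_letters(finger: str) -> set[str]:
    """Ergonomic prior: restrict candidates to the keys the
    predicted finger strikes in touch typing."""
    return FINGER_KEYS[finger]

def decode_finger_sequence(fingers: list[str]) -> str:
    """Greedy decoding: for each predicted finger, pick the most
    frequent letter among its candidate keys."""
    return "".join(
        max(candidate_letters(f), key=LETTER_FREQ.__getitem__)
        for f in fingers
    )

# Every lowercase letter is covered by exactly one finger.
assert set().union(*FINGER_KEYS.values()) == set(string.ascii_lowercase)
```

For example, `decode_finger_sequence(["left_middle", "left_pinky", "right_index"])` returns `"ean"`, since `e`, `a`, and `n` are the most frequent letters each finger can type. The point of this stage in the paper's pipeline is that each finger class constrains the character hypothesis to at most six keys, which is what makes the downstream sentence reconstruction tractable.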