Confidence-Based Self-Training for EMG-to-Speech: Leveraging Synthetic EMG for Robust Modeling

📅 2025-06-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the scarcity of paired electromyography (EMG)-speech data for voiced EMG-to-speech reconstruction—leading to poor model generalization—this paper proposes CoM2S, a phoneme-level confidence-guided multi-speaker self-training framework, and introduces Libri-EMG, the first open-source, temporally aligned, multi-speaker EMG-speech dataset. Our key contributions are: (1) a novel phoneme-level confidence filtering mechanism that leverages a pre-trained EMG-to-speech generative model to produce high-fidelity synthetic training data; and (2) the first large-scale, fully annotated, cross-speaker EMG-speech benchmark dataset. Experiments demonstrate significant improvements in phoneme accuracy, reduced phonemic confusions, and a substantial decrease in word error rate. Both the codebase and the Libri-EMG dataset will be publicly released to advance research in EMG-to-speech synthesis.

📝 Abstract
Voiced Electromyography (EMG)-to-Speech (V-ETS) models reconstruct speech from muscle activity signals, facilitating applications such as neurolaryngologic diagnostics. Despite its potential, the advancement of V-ETS is hindered by a scarcity of paired EMG-speech data. To address this, we propose a novel Confidence-based Multi-Speaker Self-training (CoM2S) approach, along with a newly curated Libri-EMG dataset. This approach leverages synthetic EMG data generated by a pre-trained model, followed by a proposed filtering mechanism based on phoneme-level confidence to enhance the ETS model through the proposed self-training techniques. Experiments demonstrate our method improves phoneme accuracy, reduces phonological confusion, and lowers word error rate, confirming the effectiveness of our CoM2S approach for V-ETS. In support of future research, we will release the codes and the proposed Libri-EMG dataset, an open-access collection of time-aligned, multi-speaker voiced EMG and speech recordings.
Problem

Research questions and friction points this paper is trying to address.

Addressing scarcity of paired EMG-speech data for V-ETS models
Improving EMG-to-Speech accuracy with synthetic data and confidence filtering
Reducing word error rate in voiced EMG-to-speech reconstruction
Innovation

Methods, ideas, or system contributions that make the work stand out.

Confidence-based multi-speaker self-training approach
Synthetic EMG data from pre-trained model
Phoneme-level confidence filtering mechanism
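As a rough illustration of the filtering idea described above, the sketch below keeps only those synthetic EMG-speech pairs whose per-phoneme recognition confidence clears a threshold before they enter self-training. All names, the data layout, and the threshold value are hypothetical; the paper's actual mechanism may score and select samples differently.

```python
# Hypothetical sketch of phoneme-level confidence filtering for
# self-training. Field names and the 0.8 threshold are illustrative,
# not taken from the paper.

def filter_by_phoneme_confidence(samples, threshold=0.8):
    """Keep synthetic samples whose minimum per-phoneme confidence
    (e.g., posteriors from a pre-trained recognizer run on the
    reconstructed speech) meets the threshold."""
    kept = []
    for sample in samples:
        if min(sample["phoneme_confidences"]) >= threshold:
            kept.append(sample)
    return kept

synthetic = [
    {"id": "utt1", "phoneme_confidences": [0.95, 0.91, 0.88]},
    {"id": "utt2", "phoneme_confidences": [0.97, 0.42, 0.90]},
]
print([s["id"] for s in filter_by_phoneme_confidence(synthetic)])  # ['utt1']
```

Taking the minimum over phonemes (rather than the mean) is a deliberately strict choice here: a single poorly reconstructed phoneme rejects the whole utterance, which trades data quantity for label quality.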
Xiaodan Chen
ETIS, CY Cergy Paris Université - ENSEA – CNRS, UMR 8051, 2 Av. Adolphe Chauvin, 95300 Pontoise, France; Institute for Infocomm Research, A*STAR, 1 Fusionopolis Way, #20-10 Connexis North Tower, Singapore 138632; IPAL (International Research Laboratory on Artificial Intelligence), CNRS, Singapore
Xiaoxue Gao
Research Scientist, I2R, A*STAR; National University of Singapore; IEEE Senior Member
Generative AI · Speech · Large language models
Mathias Quoy
ETIS, CY Cergy Paris Université - ENSEA – CNRS, UMR 8051, 2 Av. Adolphe Chauvin, 95300 Pontoise, France; Institute for Infocomm Research, A*STAR, 1 Fusionopolis Way, #20-10 Connexis North Tower, Singapore 138632; IPAL (International Research Laboratory on Artificial Intelligence), CNRS, Singapore
Alex Pitti
ETIS, CY Cergy Paris Université - ENSEA – CNRS, UMR 8051, 2 Av. Adolphe Chauvin, 95300 Pontoise, France; Institute for Infocomm Research, A*STAR, 1 Fusionopolis Way, #20-10 Connexis North Tower, Singapore 138632; IPAL (International Research Laboratory on Artificial Intelligence), CNRS, Singapore
Nancy F. Chen
Institute for Infocomm Research, A*STAR, 1 Fusionopolis Way, #20-10 Connexis North Tower, Singapore 138632; IPAL (International Research Laboratory on Artificial Intelligence), CNRS, Singapore