MEG-to-MEG Transfer Learning and Cross-Task Speech/Silence Detection with Limited Data

📅 2026-02-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the challenges of low data efficiency and cross-task decoding in magnetoencephalography (MEG)-based speech brain–computer interfaces by proposing a Conformer-based transfer learning framework. The model is pretrained on large-scale auditory data and fine-tuned with only five minutes of individual-specific data, enabling effective cross-task transfer between speech perception and production tasks. This work provides the first empirical evidence for shared neural representations between these two task types, overcoming the conventional reliance on task-specific motor signals. Experimental results demonstrate performance improvements of 1–4% in within-task decoding accuracy and 5–6% in cross-task scenarios. Notably, models trained on speech production data successfully decode passive auditory perception significantly above chance level, highlighting the framework’s robust generalization capability across distinct cognitive tasks.

📝 Abstract
Data-efficient neural decoding is a central challenge for speech brain-computer interfaces. We present the first demonstration of transfer learning and cross-task decoding for MEG-based speech models spanning perception and production. We pre-train a Conformer-based model on 50 hours of single-subject listening data and fine-tune on just 5 minutes per subject across 18 participants. Transfer learning yields consistent improvements, with in-task accuracy gains of 1-4% and larger cross-task gains of up to 5-6%. Not only does pre-training improve performance within each task, but it also enables reliable cross-task decoding between perception and production. Critically, models trained on speech production decode passive listening above chance, confirming that learned representations reflect shared neural processes rather than task-specific motor activity.
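The abstract's protocol — pre-train on abundant data from one task, then fine-tune on a few minutes of subject-specific data — can be illustrated with a toy sketch. This is not the paper's method: it uses synthetic features and a plain logistic-regression classifier as a stand-in for the Conformer model, purely to show why starting from pretrained weights helps when the target data is scarce.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n, w_true, noise=1.0):
    # Synthetic stand-in for MEG features: binary speech/silence labels
    # generated from a noisy linear rule
    X = rng.normal(size=(n, 8))
    y = (X @ w_true + noise * rng.normal(size=n) > 0).astype(float)
    return X, y

def train_logreg(X, y, w=None, epochs=200, lr=0.1):
    # Gradient-descent logistic regression; w=None trains from scratch,
    # while passing in pretrained weights fine-tunes them (transfer learning)
    if w is None:
        w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        w -= lr * X.T @ (p - y) / len(y)
    return w

def accuracy(w, X, y):
    return float((((X @ w) > 0) == (y > 0.5)).mean())

# "Pretraining task": abundant data from a related distribution
w_pre_task = rng.normal(size=8)
X_big, y_big = make_data(5000, w_pre_task)
w_pretrained = train_logreg(X_big, y_big)

# "Target subject": only a handful of trials, with a slightly shifted rule
w_subject = w_pre_task + 0.3 * rng.normal(size=8)
X_small, y_small = make_data(40, w_subject)
X_test, y_test = make_data(2000, w_subject)

acc_scratch = accuracy(train_logreg(X_small, y_small), X_test, y_test)
acc_transfer = accuracy(train_logreg(X_small, y_small, w=w_pretrained.copy()),
                        X_test, y_test)
print(f"scratch={acc_scratch:.2f}  transfer={acc_transfer:.2f}")
```

The fine-tuned model starts from weights already adapted to a related task, so it typically outperforms a model trained from scratch on the same 40 samples — the same intuition behind the paper's 5-minute per-subject fine-tuning, though the real gains there come from shared neural representations across perception and production.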
Problem

Research questions and friction points this paper is trying to address.

MEG
speech decoding
transfer learning
cross-task
data efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

MEG
transfer learning
cross-task decoding
Conformer
speech brain-computer interface
Xabier de Zuazo
HiTZ Center, University of the Basque Country – UPV/EHU, Spain
Vincenzo Verbeni
Basque Center on Cognition, Brain and Language – BCBL, Spain
Eva Navas
University of the Basque Country
Speech synthesis, speaker diarization
Ibon Saratxaga
University of the Basque Country (UPV/EHU)
Speech, sound classification
Mathieu Bourguignon
Basque Center on Cognition, Brain and Language – BCBL, Spain
Nicola Molinaro
Basque Center on Cognition, Brain and Language – Ikerbasque
Cognition, Neural oscillations, Language disorders, Reading, Speech