🤖 AI Summary
To address low classification accuracy in few-shot intracranial electroencephalography (iEEG) decoding, a problem caused by the limited number of subjects and the short duration of recordings, this paper proposes a subject-specific machine learning framework. Methodologically, it introduces a novel decoding paradigm that fuses multi-regional electrode signals ("combined-channel" mode) while explicitly incorporating spatial information. The framework extracts 18-dimensional time-frequency and statistical features and employs an ensemble of classifiers, including Random Forest and XGBoost, supporting both "best-channel" and "combined-channel" inference modes. The key contribution is physiologically interpretable, cross-regional collaborative modeling, enabled by quantifying the task-relevant functional contributions of distinct brain areas. Evaluated on three benchmark datasets (Music Reconstruction, Audio-Visual, and AJILE12), the framework achieves a maximum F1-score of 0.84 ± 0.08. Critically, the combined-channel mode consistently outperforms the best-channel baseline, demonstrating the efficacy and robustness of spatial integration for few-shot iEEG decoding.
📝 Abstract
Intracranial EEG (iEEG) recording, characterized by high spatial and temporal resolution and superior signal-to-noise ratio (SNR), enables the development of precise brain-computer interface (BCI) systems for neural decoding. However, the invasive nature of the procedure significantly limits the availability of iEEG datasets in terms of both the number of participants and the duration of recorded sessions. To address this limitation, we propose a single-participant machine learning model optimized for decoding iEEG signals. The model employs 18 key features and operates in two modes: best-channel and combined-channel. The combined-channel mode integrates spatial information from multiple brain regions, leading to superior classification performance. Evaluations across three datasets -- Music Reconstruction, Audio-Visual, and AJILE12 -- demonstrate that the combined-channel mode consistently outperforms the best-channel mode across all classifiers. In the best-performing cases, Random Forest achieved an F1-score of 0.81 ± 0.05 on the Music Reconstruction dataset and 0.82 ± 0.10 on the Audio-Visual dataset, while XGBoost achieved an F1-score of 0.84 ± 0.08 on the AJILE12 dataset. Furthermore, analysis of brain-region contributions in the combined-channel mode revealed that the model identifies relevant brain regions aligned with physiological expectations for each task and effectively combines data from electrodes in these regions to achieve high performance. These findings highlight the potential of integrating spatial information across brain regions to improve task decoding, offering new avenues for advancing BCI systems and neurotechnological applications.
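The distinction between the two inference modes can be sketched in code. This is a minimal illustration, not the paper's implementation: it uses only 4 toy statistical features instead of the 18 time-frequency and statistical features described above, omits the Random Forest/XGBoost classifiers, and all function and channel names (`channel_features`, `best_channel_vector`, `"HG_1"`, etc.) are hypothetical.

```python
import statistics

def channel_features(x):
    """Toy per-channel feature vector (the paper's pipeline extracts
    18 time-frequency and statistical features; only 4 simple
    statistical ones are sketched here)."""
    mean = statistics.fmean(x)
    std = statistics.pstdev(x)
    # line length: total absolute sample-to-sample variation
    line_length = sum(abs(b - a) for a, b in zip(x, x[1:]))
    # zero crossings around the mean
    zero_cross = sum(1 for a, b in zip(x, x[1:])
                     if (a - mean) * (b - mean) < 0)
    return [mean, std, line_length, zero_cross]

def best_channel_vector(trial, best_ch):
    """Best-channel mode: features from the single top-ranked electrode."""
    return channel_features(trial[best_ch])

def combined_channel_vector(trial):
    """Combined-channel mode: concatenate features across all electrodes,
    in a fixed channel order, so spatial (per-region) information from
    multiple brain areas enters one feature vector."""
    vec = []
    for ch in sorted(trial):
        vec.extend(channel_features(trial[ch]))
    return vec

# Hypothetical trial: two electrodes from different regions
trial = {"HG_1": [0.0, 1.0, 0.0, -1.0],   # e.g. Heschl's gyrus
         "STG_2": [1.0, 2.0, 3.0, 4.0]}   # e.g. superior temporal gyrus
single = best_channel_vector(trial, "HG_1")     # 4-dimensional
combined = combined_channel_vector(trial)       # 8-dimensional
```

Either vector would then be fed to a classifier such as Random Forest or XGBoost; the combined vector lets the model weight electrodes from different regions jointly, which is the spatial integration the abstract credits for the performance gain.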