🤖 AI Summary
To address key challenges in multi-channel intracranial electroencephalography (iEEG) seizure classification, namely variable channel counts across subjects, poor cross-subject generalization, and insufficient long-term temporal modeling, this paper proposes a Channel-Adaptive (CA) architecture. Methodologically, it introduces (1) a channel-adaptive fusion mechanism coupled with vector-symbolic spatial reconstruction, enabling robust processing of an arbitrary number of channels; (2) a long-horizon memory-accumulation classifier that extends the effective temporal context to the clinically required 2-minute window; and (3) a paradigm of cross-subject pretraining followed by single-seizure fine-tuning. Instantiated as CA-EEGWaveNet and CA-EEGNet, the framework achieves median F1-scores of 0.78 and 0.79, surpassing full-dataset baselines (0.76 and 0.74) while cutting fine-tuning time to one-fifth. The CA framework thus enables unified modeling across heterogeneous subjects together with high-performance personalized adaptation.
📝 Abstract
Objective: We develop a channel-adaptive (CA) architecture that seamlessly processes multivariate time series with an arbitrary number of channels, in particular intracranial electroencephalography (iEEG) recordings. Methods: Our CA architecture first processes the iEEG signal with a state-of-the-art model applied to each channel independently. The resulting features are then fused by a vector-symbolic algorithm that reconstructs the spatial relationship using a trainable scalar per channel. Finally, the fused features are accumulated in a long-term memory spanning up to 2 minutes to perform the classification. Each CA-model can be pre-trained on a large corpus of iEEG recordings from multiple heterogeneous subjects. The pre-trained model is then personalized to each subject via a quick fine-tuning routine that uses an equal or smaller amount of data than existing state-of-the-art models while requiring only 1/5 of the time. Results: We evaluate our CA-models on a seizure detection task on both a short-term (~20 hours) and a long-term (~2500 hours) dataset. In particular, our CA-EEGWaveNet is trained on a single seizure of the tested subject, while the baseline EEGWaveNet is trained on all but one. Even in this challenging scenario, our CA-EEGWaveNet surpasses the baseline in median F1-score (0.78 vs. 0.76). Similarly, CA-EEGNet, built on EEGNet, surpasses its baseline (0.79 vs. 0.74). Conclusion and significance: Our CA-model addresses two issues: first, it is channel-adaptive and can therefore be trained across heterogeneous subjects without loss of performance; second, it extends the effective temporal context to a clinically relevant length. It is thus a drop-in replacement for existing models, improving both flexibility and performance across the board.
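The channel-adaptive idea described above can be sketched in a few lines: each channel's embedding is weighted by a trainable per-channel scalar and superposed into a single fixed-size vector, so subjects with different electrode counts produce representations of identical shape; the fused vectors are then accumulated into a long-term memory before classification. This is a minimal illustration under assumptions of mine, not the paper's code: the function and class names are invented, the memory is a simple running mean over a bounded buffer (the paper's accumulation rule may differ), and the 120-window capacity assumes 1-second windows over the 2-minute horizon.

```python
import numpy as np

def fuse_channels(features, alphas):
    """Superpose per-channel embeddings into one fixed-size vector.

    features: (C, D) per-channel embeddings; C may vary across subjects.
    alphas:   (C,) trainable scalars encoding each channel's spatial role.
    """
    return (alphas[:, None] * features).sum(axis=0)  # shape (D,)

class MemoryAccumulator:
    """Bounded memory over the last `capacity` fused windows.

    Hypothetical running-mean variant used only for illustration.
    """
    def __init__(self, capacity):
        self.capacity = capacity
        self.buffer = []

    def update(self, fused):
        self.buffer.append(fused)
        if len(self.buffer) > self.capacity:
            self.buffer.pop(0)  # drop the oldest window
        return np.mean(self.buffer, axis=0)  # long-term context vector

# Two subjects with different channel counts yield same-size features.
rng = np.random.default_rng(0)
f_a = fuse_channels(rng.normal(size=(32, 64)), rng.normal(size=32))
f_b = fuse_channels(rng.normal(size=(97, 64)), rng.normal(size=97))

# Assumed: 1-second windows, so a 2-minute memory holds 120 entries.
mem = MemoryAccumulator(capacity=120)
ctx = mem.update(f_a)
```

The key property is that the fused vector's dimensionality depends only on the per-channel embedding size, never on the channel count, which is what allows one pre-trained model to be fine-tuned on any subject's montage.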