The role of data partitioning on the performance of EEG-based deep learning models in supervised cross-subject analysis: a preliminary study

📅 2025-05-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
In EEG-based deep learning for cross-subject analysis, inconsistent data partitioning and cross-validation (CV) protocols frequently induce data leakage, performance overestimation, and non-comparable results. To address this, the authors conduct a large-scale empirical study, comprising over 100,000 model trainings, that evaluates four established architectures (ShallowConvNet, EEGNet, DeepConvNet, and a temporal-based ResNet) on three supervised cross-subject classification tasks (BCI, Parkinson's disease detection, and Alzheimer's disease detection) under five distinct CV settings. The results quantitatively establish that subject-wise data splitting is a necessary condition for valid cross-subject evaluation, except where within-subject analysis is acceptable (e.g., BCI). Moreover, nested leave-N-subjects-out (N-LNSO) CV prevents data leakage, mitigates overfitting to validation data, improves the reliability of reported performance, and removes the bias of non-nested CV toward larger models. Based on these findings, the paper proposes a reproducible EEG model evaluation protocol, establishing a methodological benchmark for the field.

📝 Abstract
Deep learning is significantly advancing the analysis of electroencephalography (EEG) data by effectively discovering highly nonlinear patterns within the signals. Data partitioning and cross-validation are crucial for assessing model performance and ensuring study comparability, as they can produce varied results and data leakage due to specific signal properties (e.g., biometric). Such variability leads to incomparable studies and, increasingly, overestimated performance claims, which are detrimental to the field. Nevertheless, no comprehensive guidelines for proper data partitioning and cross-validation exist in the domain, nor is there a quantitative evaluation of their impact on model accuracy, reliability, and generalizability. To assist researchers in identifying optimal experimental strategies, this paper thoroughly investigates the role of data partitioning and cross-validation in evaluating EEG deep learning models. Five cross-validation settings are compared across three supervised cross-subject classification tasks (BCI, Parkinson's, and Alzheimer's disease detection) and four established architectures of increasing complexity (ShallowConvNet, EEGNet, DeepConvNet, and Temporal-based ResNet). The comparison of over 100,000 trained models underscores, first, the importance of using subject-based cross-validation strategies for evaluating EEG deep learning models, except when within-subject analyses are acceptable (e.g., BCI). Second, it highlights the greater reliability of nested approaches (N-LNSO) compared to non-nested counterparts, which are prone to data leakage and favor larger models overfitting to validation data. In conclusion, this work provides EEG deep learning researchers with an analysis of data partitioning and cross-validation and offers guidelines to avoid data leakage, currently undermining the domain with potentially overestimated performance claims.
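The abstract's central point is that EEG epochs from the same subject must never be split across training and test sets, because subject-specific (biometric) signal properties leak identity information. A minimal sketch of subject-wise leave-one-subject-out splitting is shown below; the variable names (`subject_ids`, the `S1`/`S2`/`S3` labels) are illustrative assumptions, not from the paper.

```python
# Sketch of subject-wise (leave-one-subject-out) splitting, assuming each
# EEG epoch is tagged with the ID of the subject it was recorded from.

def leave_one_subject_out(subject_ids):
    """Yield (held_out_subject, train_indices, test_indices) per fold.

    Every epoch of the held-out subject goes to the test set; all other
    subjects' epochs form the training set, so no subject's data ever
    appears on both sides of the split.
    """
    for held_out in sorted(set(subject_ids)):
        test_idx = [i for i, s in enumerate(subject_ids) if s == held_out]
        train_idx = [i for i, s in enumerate(subject_ids) if s != held_out]
        yield held_out, train_idx, test_idx

# Usage: 3 subjects with 2 epochs each -> 3 folds.
ids = ["S1", "S1", "S2", "S2", "S3", "S3"]
for subj, tr, te in leave_one_subject_out(ids):
    # Train and test subjects are disjoint in every fold: no identity leakage.
    assert not set(ids[i] for i in tr) & set(ids[i] for i in te)
```

A sample-wise (shuffled) split over the same epochs would, by contrast, place epochs of the same subject in both sets, which is the leakage pattern the paper argues inflates cross-subject performance.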
Problem

Research questions and friction points this paper is trying to address.

Evaluates impact of data partitioning on EEG deep learning model performance
Addresses lack of guidelines for cross-validation in EEG-based studies
Compares cross-validation strategies to prevent data leakage and overfitting
Innovation

Methods, ideas, or system contributions that make the work stand out.

Subject-based cross-validation for EEG models
Nested cross-validation to prevent data leakage
Comparison of over 100,000 trained models across tasks and architectures
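The nested cross-validation idea in the bullets above can be sketched as follows: an outer subject-wise split fixes the test subject, and an inner subject-wise split over the remaining subjects selects validation data for model selection, so the test subject never influences hyperparameter choices. Holding out a single subject per inner fold is an illustrative simplification, not necessarily the paper's exact N-LNSO configuration.

```python
# Hedged sketch of nested leave-N-subjects-out (N-LNSO) cross-validation
# with N = 1 at both levels, for illustration only.

def nested_lnso(subjects):
    """Yield (test_subject, val_subject, train_subjects) triples.

    Outer loop: hold out one subject for final testing.
    Inner loop: hold out one of the remaining subjects for validation;
    the rest are used for training. Test, validation, and training
    subjects are disjoint in every fold, so validation-based model
    selection never sees the test subject's data.
    """
    for test_s in subjects:
        inner = [s for s in subjects if s != test_s]
        for val_s in inner:
            train = [s for s in inner if s != val_s]
            yield test_s, val_s, train

# Usage: 4 subjects -> 4 outer x 3 inner = 12 folds.
folds = list(nested_lnso(["S1", "S2", "S3", "S4"]))
```

In a non-nested setup the same split would serve both model selection and final evaluation, which is the leakage path the paper identifies as favoring larger models that overfit the validation data.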