Temporal Context and Architecture: A Benchmark for Naturalistic EEG Decoding

📅 2026-01-29
🤖 AI Summary
This study systematically investigates how model architecture and temporal context length interact in decoding naturalistic electroencephalography (EEG) signals, and how that interaction affects performance and robustness. Leveraging the HBN movie-watching dataset, we evaluate five architectures (CNN, LSTM, EEGXF, S4, and S5) across temporal segments ranging from 8 to 128 seconds, and assess real-world robustness under cross-frequency shifts, out-of-distribution tasks, and leave-one-subject-out settings. The results reveal an efficiency–robustness trade-off across architectures and context lengths: S5 achieves 98.7% ± 0.6% accuracy at a 64-second context with roughly 1/20th the parameters of the CNN, making it well suited to high-accuracy, low-overhead applications, whereas EEGXF is more robust under frequency shifts, favoring scenarios that require conservative uncertainty estimation.

📝 Abstract
We study how model architecture and temporal context interact in naturalistic EEG decoding. Using the HBN movie-watching dataset, we benchmark CNN, LSTM, a stabilized Transformer (EEGXF), S4, and S5 on a 4-class task across segment lengths from 8 s to 128 s. Accuracy improves with longer context: at 64 s, S5 reaches 98.7% ± 0.6% and CNN 98.3% ± 0.3%, while S5 uses roughly 20x fewer parameters than CNN. To probe real-world robustness, we evaluate zero-shot cross-frequency shifts, cross-task OOD inputs, and leave-one-subject-out generalization. S5 achieves stronger cross-subject accuracy but makes over-confident errors on OOD tasks; EEGXF is more conservative and stable under frequency shifts, though less calibrated in-distribution. These results reveal a practical efficiency–robustness trade-off: S5 for parameter-efficient peak accuracy; EEGXF when robustness and conservative uncertainty estimation are critical.
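The abstract's context-length sweep rests on cutting a continuous recording into fixed-length segments (8 s to 128 s). A minimal sketch of that windowing step is below; the sampling rate, recording length, and function name are illustrative assumptions, not details from the paper.

```python
# Hypothetical sketch: slicing a continuous EEG recording into
# non-overlapping fixed-length segments, as in an 8 s - 128 s
# context-length sweep. sfreq and the recording length are made up.

def segment_eeg(n_samples: int, sfreq: float, window_s: float):
    """Return (start, stop) sample indices of non-overlapping windows."""
    win = int(window_s * sfreq)
    return [(s, s + win) for s in range(0, n_samples - win + 1, win)]

# A 10-minute recording at 100 Hz, cut into 64 s segments:
windows = segment_eeg(n_samples=600 * 100, sfreq=100, window_s=64)
print(len(windows))  # 9 full 64 s windows fit in 600 s
```

Longer windows mean fewer training examples per recording, which is one practical cost of the accuracy gains reported at 64 s.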
Problem

Research questions and friction points this paper is trying to address.

naturalistic EEG decoding
temporal context
model architecture
robustness
cross-subject generalization
Innovation

Methods, ideas, or system contributions that make the work stand out.

EEG decoding
temporal context
model architecture
robustness evaluation
parameter efficiency
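The cross-subject results above come from a leave-one-subject-out (LOSO) protocol: each subject is held out in turn while the model trains on the rest. A minimal sketch of the split logic, assuming per-segment subject labels (the subject IDs and segment counts are invented for illustration):

```python
# Hypothetical sketch of leave-one-subject-out (LOSO) splitting.
# subject_ids gives the subject each segment came from; each fold
# holds out all segments of one subject for testing.

def loso_splits(subject_ids):
    """Yield (train_idx, test_idx) pairs, one fold per unique subject."""
    for held_out in sorted(set(subject_ids)):
        train = [i for i, s in enumerate(subject_ids) if s != held_out]
        test = [i for i, s in enumerate(subject_ids) if s == held_out]
        yield train, test

# Six segments from three subjects:
ids = ["s1", "s1", "s2", "s2", "s3", "s3"]
folds = list(loso_splits(ids))
print(len(folds))  # 3 folds, one per subject
print(folds[0])    # ([2, 3, 4, 5], [0, 1])
```

Splitting by subject rather than by segment prevents segments from the same person appearing in both train and test sets, which is what makes the reported cross-subject accuracy meaningful.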