RatioWaveNet: A Learnable RDWT Front-End for Robust and Interpretable EEG Motor-Imagery Classification

📅 2025-10-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address poor robustness on challenging subjects and weak cross-subject generalization in non-invasive motor-imagery brain–computer interfaces (MI-BCIs) based on EEG decoding, this paper proposes a learnable Rationally-Dilated Wavelet Transform (RDWT) front end that performs a shift-invariant, multi-resolution subband decomposition. Combined with grouped 1-D convolutions, multi-kernel convolutions, and a grouped-query attention encoder, it forms an efficient temporal-modeling architecture. The method markedly improves discriminability for the hardest subjects: on the BCI-IV-2a/2b datasets, worst-subject accuracy improves by up to 2.54 percentage points across five seeded runs, while average performance also rises consistently at modest computational overhead. The core innovation is embedding differentiable wavelet analysis into an end-to-end deep-learning framework, jointly achieving interpretability, robustness, and efficiency.

📝 Abstract
Brain-computer interfaces (BCIs) based on motor imagery (MI) translate covert movement intentions into actionable commands, yet reliable decoding from non-invasive EEG remains challenging due to nonstationarity, low SNR, and subject variability. We present RatioWaveNet, which augments a strong temporal CNN-Transformer backbone (TCFormer) with a trainable, Rationally-Dilated Wavelet Transform (RDWT) front end. The RDWT performs an undecimated, multi-resolution subband decomposition that preserves temporal length and shift-invariance, enhancing sensorimotor rhythms while mitigating jitter and mild artifacts; subbands are fused via lightweight grouped 1-D convolutions and passed to a multi-kernel CNN for local temporal-spatial feature extraction, a grouped-query attention encoder for long-range context, and a compact TCN head for causal temporal integration. Our goal is to test whether this principled wavelet front end improves robustness precisely where BCIs typically fail, on the hardest subjects, and whether such gains persist on average across seeds under both intra- and inter-subject protocols. On BCI-IV-2a and BCI-IV-2b, across five seeds, RatioWaveNet improves worst-subject accuracy over the Transformer backbone by +0.17 / +0.42 percentage points (Sub-Dependent / LOSO) on 2a and by +1.07 / +2.54 percentage points on 2b, with consistent average-case gains and modest computational overhead. These results indicate that a simple, trainable wavelet front end is an effective plug-in to strengthen Transformer-based BCIs, improving worst-case reliability without sacrificing efficiency.
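The front end's key property, an undecimated decomposition in which every subband keeps the input's temporal length, can be illustrated with a dyadic à trous (stationary) wavelet transform. Note this is a minimal sketch: fixed Haar filters with circular boundaries stand in for the paper's learnable, rationally-dilated filter bank, and the function name `atrous_subbands` is illustrative, not from the paper.

```python
import numpy as np

def atrous_subbands(x, levels=3):
    """Undecimated (a trous) wavelet decomposition of a 1-D signal.

    Fixed Haar analysis filters stand in for the paper's learnable,
    rationally-dilated filter bank. Every subband keeps the input
    length, so the decomposition is shift-invariant (exactly so here,
    since circular convolution is used for simplicity).
    """
    lo = np.array([1.0, 1.0]) / np.sqrt(2.0)   # Haar low-pass filter
    hi = np.array([1.0, -1.0]) / np.sqrt(2.0)  # Haar high-pass filter
    approx, details = x.astype(float), []
    for j in range(levels):
        dilation = 2 ** j                       # a trous: dilate filters, never downsample
        idx = np.arange(len(approx))
        shifted = approx[(idx + dilation) % len(approx)]  # circular shift by the dilation
        details.append(approx * hi[0] + shifted * hi[1])  # detail subband at level j
        approx = approx * lo[0] + shifted * lo[1]         # coarser approximation
    return details + [approx]                   # all subbands same length as x

t = np.arange(256)
x = np.sin(2 * np.pi * 10 * t / 256)            # a 10-cycle, mu-rhythm-like tone
bands = atrous_subbands(x, levels=3)
print([b.shape for b in bands])                 # every subband has shape (256,)
```

Because the filters are dilated instead of the signal being downsampled, circularly shifting the input simply shifts every subband by the same amount, which is the shift-invariance the abstract attributes to the RDWT front end.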
Problem

Research questions and friction points this paper is trying to address.

Improving EEG motor-imagery classification robustness for challenging subjects
Enhancing sensorimotor rhythms while mitigating artifacts and jitter
Addressing nonstationarity and low signal-to-noise ratio in EEG signals
Innovation

Methods, ideas, or system contributions that make the work stand out.

Trainable wavelet front end for EEG signal enhancement
Multi-kernel CNN with grouped-query attention encoder
Causal temporal integration using compact TCN head
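Grouped-query attention, used in the paper's encoder, cuts key/value cost by letting several query heads share one key/value head. A minimal numpy sketch under assumed dimensions (the sequence length, model width, and head counts below are illustrative, not the paper's configuration):

```python
import numpy as np

def grouped_query_attention(x, Wq, Wk, Wv, n_q_heads, n_kv_heads):
    """Minimal grouped-query attention over a sequence x of shape (T, d).

    n_q_heads query heads share n_kv_heads key/value heads
    (n_q_heads must be a multiple of n_kv_heads). With
    n_kv_heads == n_q_heads this reduces to standard multi-head
    attention; with n_kv_heads == 1 it is multi-query attention.
    """
    T, d = x.shape
    hd = d // n_q_heads                            # per-head dimension
    group = n_q_heads // n_kv_heads                # query heads per kv head
    q = (x @ Wq).reshape(T, n_q_heads, hd)         # (T, Hq, hd)
    k = (x @ Wk).reshape(T, n_kv_heads, hd)        # (T, Hkv, hd)
    v = (x @ Wv).reshape(T, n_kv_heads, hd)
    out = np.empty_like(q)
    for h in range(n_q_heads):
        kv = h // group                            # kv head shared by this query head
        scores = q[:, h] @ k[:, kv].T / np.sqrt(hd)
        w = np.exp(scores - scores.max(axis=-1, keepdims=True))
        w /= w.sum(axis=-1, keepdims=True)         # softmax over key positions
        out[:, h] = w @ v[:, kv]
    return out.reshape(T, d)

rng = np.random.default_rng(0)
T, d, Hq, Hkv = 16, 32, 8, 2                       # 8 query heads share 2 kv heads
x = rng.standard_normal((T, d))
Wq = rng.standard_normal((d, d))
Wk = rng.standard_normal((d, (d // Hq) * Hkv))     # kv projections are 4x smaller
Wv = rng.standard_normal((d, (d // Hq) * Hkv))
y = grouped_query_attention(x, Wq, Wk, Wv, Hq, Hkv)
print(y.shape)                                     # (16, 32)
```

The design choice is a memory/quality trade-off: the key/value projection matrices shrink by the factor `n_q_heads / n_kv_heads` while each query head still attends over the full sequence.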