Exploring EEG and Eye Movement Fusion for Multi-Class Target RSVP-BCI

📅 2025-01-07
🤖 AI Summary
Current RSVP-BCI systems rely solely on unimodal EEG for multi-class target decoding and suffer from limited classification accuracy due to low discriminability among the event-related potentials (ERPs) elicited by different target classes. To address this, we propose MTREE-Net, a deep learning framework that pioneers the integration of eye-tracking signals with EEG for multi-class RSVP decoding. MTREE-Net incorporates three key components: (1) dual complementary feature enhancement modules for cross-modal representation learning; (2) a theory-driven dynamic reweighting fusion strategy for adaptive modality weighting; and (3) a hierarchical classifier knowledge transfer mechanism that reduces the misclassification of non-target samples. Extensive experiments on a large-scale open-source dataset comprising 43 subjects demonstrate that MTREE-Net significantly outperforms state-of-the-art methods, achieving substantial improvements in multi-class target identification accuracy. These results empirically validate the performance-enhancing role of eye-movement signals in RSVP-based multi-class decoding.
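The hierarchical-classifier component can be pictured as a two-stage decision rule: first decide target vs. non-target, and only then assign a target category. The sketch below is an illustrative simplification with hypothetical names; the actual MTREE-Net transfers knowledge between its two classifiers during training rather than merely chaining them at inference.

```python
def hierarchical_decode(p_target, class_logits, threshold=0.5):
    """Two-stage decision sketch (not the paper's model).

    Stage 1: a binary classifier's probability decides target vs. non-target.
    Stage 2: only if a target is detected, pick its category from the
    multi-class logits, which avoids forcing a category onto non-targets.
    """
    if p_target < threshold:
        return "non-target"
    # index of the highest-scoring target category
    best = max(range(len(class_logits)), key=lambda i: class_logits[i])
    return f"target-{best}"
```

Gating the category decision on the binary stage is one plausible way such a hierarchy suppresses non-target misclassifications, since uncertain samples never reach the multi-class stage.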

📝 Abstract
Rapid Serial Visual Presentation (RSVP)-based Brain-Computer Interfaces (BCIs) facilitate high-throughput target image detection by identifying event-related potentials (ERPs) evoked in EEG signals. RSVP-BCI systems effectively detect single-class targets within a stream of images but have limited applicability in scenarios that require detecting multiple target categories. Multi-class RSVP-BCI systems address this limitation by simultaneously identifying the presence of a target and distinguishing its category. However, existing multi-class RSVP decoding algorithms predominantly rely on single-modality EEG decoding, which restricts their performance because of the high similarity between ERPs evoked by different target categories. In this work, we introduce the eye movement (EM) modality into multi-class RSVP decoding and explore EEG and EM fusion to enhance decoding performance. First, we design three independent multi-class target RSVP tasks and build an open-source dataset comprising EEG and EM signals from 43 subjects. Then, we propose the Multi-class Target RSVP EEG and EM fusion Network (MTREE-Net) to enhance multi-class RSVP decoding. Specifically, a dual-complementary module is proposed to strengthen the differentiation of uni-modal features across categories. To improve multi-modal fusion performance, we adopt a dynamic reweighting fusion strategy guided by theoretically derived modality contribution ratios. Furthermore, we reduce the misclassification of non-target samples through knowledge transfer between two hierarchical classifiers. Extensive experiments demonstrate the feasibility of integrating EM signals into multi-class RSVP decoding and highlight the superior performance of MTREE-Net compared to existing RSVP decoding methods. The proposed MTREE-Net and open-source dataset provide a promising framework for developing practical multi-class RSVP-BCI systems.
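As a rough illustration of dynamic reweighting fusion, the sketch below combines EEG and eye-movement feature vectors with weights derived from each modality's classifier confidence. All function and variable names are hypothetical, and the confidence proxy here stands in for the paper's theoretically derived modality contribution ratios, which the abstract does not spell out.

```python
import math

def softmax(xs):
    # numerically stable softmax over a list of logits
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def contribution_ratio(logits):
    # crude per-modality confidence: the peak softmax probability
    return max(softmax(logits))

def dynamic_reweight_fusion(eeg_feat, em_feat, eeg_logits, em_logits):
    """Fuse two same-length feature vectors with weights proportional to
    each modality's confidence (an assumed stand-in for the paper's
    contribution ratios). Returns the fused vector and the weights."""
    c_eeg = contribution_ratio(eeg_logits)
    c_em = contribution_ratio(em_logits)
    w_eeg = c_eeg / (c_eeg + c_em)
    w_em = 1.0 - w_eeg
    fused = [w_eeg * a + w_em * b for a, b in zip(eeg_feat, em_feat)]
    return fused, (w_eeg, w_em)
```

With a confident EEG head and an uncertain EM head, the EEG modality receives the larger weight, which captures the intuition behind adaptive modality weighting: the fusion leans on whichever modality is currently more discriminative.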
Problem

Research questions and friction points this paper is trying to address.

RSVP-BCIs
multi-object recognition
EEG decoding
Innovation

Methods, ideas, or system contributions that make the work stand out.

EEG-EyeTracking Fusion
MTREE-Net Architecture
Multi-target Image Recognition
Xujin Li
Key Laboratory of Brain Cognition and Brain-inspired Intelligence Technology, Institute of Automation, Chinese Academy of Sciences, Beijing, 100190, China; School of Future Technology, University of Chinese Academy of Sciences (UCAS), Beijing, 100049, China
Wei Wei
Key Laboratory of Brain Cognition and Brain-inspired Intelligence Technology, Institute of Automation, Chinese Academy of Sciences, Beijing, 100190, China
Kun Zhao
Key Laboratory of Brain Cognition and Brain-inspired Intelligence Technology, Institute of Automation, Chinese Academy of Sciences, Beijing, 100190, China; School of Artificial Intelligence, University of Chinese Academy of Sciences (UCAS), Beijing, 100049, China
Jiayu Mao
Key Laboratory of Brain Cognition and Brain-inspired Intelligence Technology, Institute of Automation, Chinese Academy of Sciences, Beijing, 100190, China; School of Artificial Intelligence, University of Chinese Academy of Sciences (UCAS), Beijing, 100049, China
Yizhuo Lu
Institute of Automation, Chinese Academy of Sciences
Artificial Intelligence, Neural Encoding and Decoding
Shuang Qiu
City University of Hong Kong
Reinforcement Learning, Agentic AI, Large Language Models, Embodied AI
Huiguang He
Institute of Automation, Chinese Academy of Sciences
Artificial Intelligence, Medical Image Processing, Brain-Computer Interface