EEGDM: Learning EEG Representation with Latent Diffusion Model

📅 2025-08-28
🤖 AI Summary
Existing self-supervised EEG representation learning methods (e.g., EEGPT, LaBraM) rely on simplistic masked reconstruction objectives, limiting their ability to capture rich semantics and complex spatiotemporal dynamics, especially under low-data and cross-task settings. To address this, we introduce latent diffusion models (LDMs) into self-supervised EEG representation learning for the first time, adopting signal generation as the pretraining objective to construct a semantically rich and structurally compact latent space. We propose a conditional LDM framework in which an EEG encoder extracts compact representations from both raw signals and channel-augmented variants to serve as conditioning inputs for the diffusion process. Experiments demonstrate high-fidelity EEG reconstruction from minimal training data and show state-of-the-art or competitive performance on downstream tasks, including motor imagery and emotion recognition, significantly enhancing representation generalizability and robustness.
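The conditioning scheme the summary describes, where an encoder compresses the EEG into a compact code that steers the generative process, can be sketched with a standard DDPM-style forward process and a conditional denoising objective. Everything below is a hypothetical stand-in under assumed details: the shapes, the linear noise schedule, the random-projection "encoder", and the zero-output placeholder denoiser are illustrative, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy EEG segment (hypothetical shape): 4 channels x 256 samples.
x0 = rng.standard_normal((4, 256))

# Linear noise schedule over T steps (a common DDPM choice; the paper's
# exact schedule is not given in this summary).
T = 1000
betas = np.linspace(1e-4, 2e-2, T)
alphas_bar = np.cumprod(1.0 - betas)

def q_sample(x0, t, eps):
    """Forward diffusion: x_t = sqrt(a_bar_t) * x0 + sqrt(1 - a_bar_t) * eps."""
    return np.sqrt(alphas_bar[t]) * x0 + np.sqrt(1.0 - alphas_bar[t]) * eps

# Hypothetical EEG encoder: here just a fixed random projection that
# distills the signal into a compact 32-d conditioning code c.
W = rng.standard_normal((256, 32)) / np.sqrt(256)
c = (x0 @ W).mean(axis=0)  # conditioning vector, shape (32,)

# One step of the denoising-score-matching objective: the model would
# predict eps from (x_t, t, c); a zero placeholder stands in for it here.
t = 500
eps = rng.standard_normal(x0.shape)
x_t = q_sample(x0, t, eps)
eps_hat = np.zeros_like(eps)  # placeholder for eps_theta(x_t, t, c)
loss = float(np.mean((eps - eps_hat) ** 2))
print(x_t.shape, round(loss, 3))
```

Because the objective forces the denoiser to reconstruct the signal from noise given only c, the encoder is pressured to pack the semantically useful content of the EEG into that compact code, which is what makes it reusable downstream.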

📝 Abstract
While electroencephalography (EEG) signal analysis using deep learning has shown great promise, existing approaches still face significant challenges in learning generalizable representations that perform well across diverse tasks, particularly when training data is limited. Current EEG representation learning methods, including EEGPT and LaBraM, typically rely on a simple masked reconstruction objective, which may not fully capture the rich semantic information and complex patterns inherent in EEG signals. In this paper, we propose EEGDM, a novel self-supervised EEG representation learning method based on a latent diffusion model, which leverages EEG signal generation as a self-supervised objective, turning the diffusion model into a strong representation learner capable of capturing EEG semantics. EEGDM incorporates an EEG encoder that distills EEG signals and their channel augmentations into a compact representation, acting as conditional information to guide the diffusion model in generating EEG signals. This design endows EEGDM with a compact latent space, which not only offers ample control over the generative process but also can be leveraged for downstream tasks. Experimental results show that EEGDM (1) can reconstruct high-quality EEG signals, (2) effectively learns robust representations, and (3) achieves competitive performance with modest pre-training data size across diverse downstream tasks, underscoring its generalizability and practical utility.
Problem

Research questions and friction points this paper is trying to address.

Learning generalizable EEG representations across diverse tasks
Overcoming limitations of simple masked reconstruction objectives
Capturing rich semantic information in EEG signals
Innovation

Methods, ideas, or system contributions that make the work stand out.

Latent diffusion model for EEG representation
Self-supervised learning via EEG generation
Compact latent space for downstream tasks
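One plausible way the compact latent space gets "leveraged for downstream tasks" is the standard frozen-encoder recipe: extract latent codes for each trial, then fit a lightweight linear probe for a task such as motor imagery classification. The codes, labels, dimensions, and learning rate below are illustrative assumptions for the sketch, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: a frozen pretrained encoder has reduced each EEG
# trial to a 32-d latent code; here random codes stand in for them.
n_trials, latent_dim = 200, 32
Z = rng.standard_normal((n_trials, latent_dim))
# Toy binary labels (e.g. left- vs right-hand motor imagery), made
# linearly recoverable from the first latent dimension.
y = (Z[:, 0] > 0).astype(int)

# Linear probe: logistic regression trained by plain gradient descent.
w, b = np.zeros(latent_dim), 0.0
lr = 0.5
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(Z @ w + b)))  # sigmoid predictions
    w -= lr * (Z.T @ (p - y)) / n_trials     # gradient of log-loss w.r.t. w
    b -= lr * float(np.mean(p - y))          # gradient w.r.t. b

acc = float(np.mean(((Z @ w + b) > 0).astype(int) == y))
print(acc)
```

If the pretraining objective has done its job, even such a minimal probe on the frozen codes should perform well, which is the practical meaning of a semantically rich, compact latent space.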