Large Language Models for EEG: A Comprehensive Survey and Taxonomy

📅 2025-06-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the fundamental challenges at the intersection of large language models (LLMs) and electroencephalography (EEG). We propose the first systematic taxonomy—spanning representation learning, EEG-to-language decoding, cross-modal generation, and clinical applications—to structure this emerging domain. Methodologically, we introduce a novel LLM-based paradigm specifically designed for EEG: it unifies modeling principles and evaluation protocols, bridges theoretical gaps between natural language processing and neural signal analysis, and leverages Transformer architectures enhanced with fine-tuning, few-shot, and zero-shot learning strategies tailored to EEG's temporal dynamics and multimodal alignment requirements. Our contributions include a comprehensive review of over 120 state-of-the-art works, a reproducible methodology framework, and an open-source resource guide. This work establishes a foundational toolkit for neural decoding, brain–computer interfaces, and clinical decision support systems.

📝 Abstract
The growing convergence between Large Language Models (LLMs) and electroencephalography (EEG) research is enabling new directions in neural decoding, brain-computer interfaces (BCIs), and affective computing. This survey offers a systematic review and structured taxonomy of recent advancements that utilize LLMs for EEG-based analysis and applications. We organize the literature into four domains: (1) LLM-inspired foundation models for EEG representation learning, (2) EEG-to-language decoding, (3) cross-modal generation including image and 3D object synthesis, and (4) clinical applications and dataset management tools. The survey highlights how transformer-based architectures adapted through fine-tuning, few-shot, and zero-shot learning have enabled EEG-based models to perform complex tasks such as natural language generation, semantic interpretation, and diagnostic assistance. By offering a structured overview of modeling strategies, system designs, and application areas, this work serves as a foundational resource for future work to bridge natural language processing and neural signal analysis through language models.
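To make domain (1) of the taxonomy concrete: LLM-inspired foundation models for EEG typically begin by segmenting the continuous multichannel signal into fixed-length patches that play the role of tokens for a Transformer encoder. The sketch below illustrates that tokenization step in NumPy under assumed shapes; the function name, patch length, and per-token normalization are illustrative choices, not the method of any specific surveyed work.

```python
import numpy as np

def eeg_to_patch_tokens(eeg, patch_len=200):
    """Segment a multichannel EEG recording into fixed-length patches
    ("tokens") suitable for a Transformer encoder.

    eeg: array of shape (n_channels, n_samples). Shapes and the
    normalization scheme here are illustrative assumptions.
    """
    n_channels, n_samples = eeg.shape
    n_patches = n_samples // patch_len            # drop any trailing remainder
    trimmed = eeg[:, : n_patches * patch_len]
    # One token per (channel, patch): shape (n_channels * n_patches, patch_len)
    tokens = trimmed.reshape(n_channels, n_patches, patch_len).reshape(-1, patch_len)
    # Per-token z-score normalization, a common EEG preprocessing step
    mu = tokens.mean(axis=1, keepdims=True)
    sigma = tokens.std(axis=1, keepdims=True) + 1e-8
    return (tokens - mu) / sigma

# Example: a 32-channel recording, 10 s sampled at 200 Hz
eeg = np.random.randn(32, 2000)
tokens = eeg_to_patch_tokens(eeg, patch_len=200)
print(tokens.shape)  # (320, 200): 32 channels x 10 one-second patches
```

Each resulting token can then be linearly projected into the model's embedding space and combined with positional (and channel) embeddings, after which fine-tuning, few-shot, or zero-shot adaptation proceeds as in standard Transformer pipelines.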
Problem

Research questions and friction points this paper is trying to address.

LLMs for EEG-based neural decoding and BCIs
Survey of LLM applications in EEG analysis
Bridging NLP and neural signal analysis
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM-inspired foundation models for EEG
EEG-to-language decoding techniques
Cross-modal generation with EEG
Naseem Babu
Department of Computer Science and Engineering, Indian Institute of Technology Patna, Bihta, Patna, India, 801106
Jimson Mathew
IIT Patna
Reliability Analysis, High Performance Computing, ML/DL Learning, Computer Vision, Time Series
A. Vinod
Infocomm Technology Cluster, Singapore Institute of Technology, 10 Dover Drive, Singapore 138683