Brain-OF: An Omnifunctional Foundation Model for fMRI, EEG and MEG

📅 2026-02-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work proposes Brain-OF, the first universal multimodal foundation model for brain signals. Existing models are confined to single modalities and therefore cannot exploit the complementary spatiotemporal characteristics and combined data scale of fMRI, EEG, and MEG. Brain-OF operates within a unified framework supporting both unimodal and multimodal inputs and introduces an Any-Resolution Neural Signal Sampler to align heterogeneous signals with varying resolutions. The architecture integrates a DINT attention mechanism with a hybrid sparse mixture-of-experts structure combining shared and routed experts, and employs a joint time-frequency masked pretraining strategy. Pretrained on a large-scale corpus of roughly 40 datasets, Brain-OF achieves the first joint modeling of all three modalities and significantly outperforms current methods across multiple downstream neuroscience tasks, demonstrating the efficacy of multimodal fusion and dual-domain pretraining.
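The hybrid expert structure described above, with shared experts capturing modality-invariant features and routed experts specializing per modality, can be sketched in toy form. The layer sizes, top-1 routing, and residual combination below are illustrative assumptions, not the paper's actual configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

class HybridSparseMoE:
    """Toy hybrid MoE: every token passes through one shared expert
    (modality-invariant path) plus its top-1 routed expert
    (modality-specific path). Sizes and routing are illustrative."""

    def __init__(self, d_model=16, d_hidden=32, n_routed=4):
        self.shared_W1 = rng.standard_normal((d_model, d_hidden)) * 0.1
        self.shared_W2 = rng.standard_normal((d_hidden, d_model)) * 0.1
        self.routed_W1 = rng.standard_normal((n_routed, d_model, d_hidden)) * 0.1
        self.routed_W2 = rng.standard_normal((n_routed, d_hidden, d_model)) * 0.1
        self.router = rng.standard_normal((d_model, n_routed)) * 0.1

    def __call__(self, x):
        # x: (n_tokens, d_model)
        shared = relu(x @ self.shared_W1) @ self.shared_W2
        logits = x @ self.router                       # (n_tokens, n_routed)
        probs = np.exp(logits - logits.max(axis=-1, keepdims=True))
        probs /= probs.sum(axis=-1, keepdims=True)     # softmax router scores
        top1 = probs.argmax(axis=-1)                   # sparse top-1 routing
        routed = np.empty_like(x)
        for i, e in enumerate(top1):
            h = relu(x[i] @ self.routed_W1[e]) @ self.routed_W2[e]
            routed[i] = probs[i, e] * h                # gate-weighted expert output
        return x + shared + routed                     # residual combination

moe = HybridSparseMoE()
tokens = rng.standard_normal((6, 16))
out = moe(tokens)
print(out.shape)  # (6, 16)
```

In this sketch the shared expert sees every token while each routed expert only processes the tokens assigned to it, which is the sparsity that keeps per-token compute constant as experts are added.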

📝 Abstract
Brain foundation models have achieved remarkable advances across a wide range of neuroscience tasks. However, most existing models are limited to a single functional modality, restricting their ability to exploit complementary spatiotemporal dynamics and the collective data scale across imaging techniques. To address this limitation, we propose Brain-OF, the first omnifunctional brain foundation model jointly pretrained on fMRI, EEG and MEG, capable of handling both unimodal and multimodal inputs within a unified framework. To reconcile heterogeneous spatiotemporal resolutions, we introduce the Any-Resolution Neural Signal Sampler, which projects diverse brain signals into a shared semantic space. To further manage semantic shifts, the Brain-OF backbone integrates DINT attention with a Sparse Mixture of Experts, where shared experts capture modality-invariant representations and routed experts specialize in modality-specific semantics. Furthermore, we propose Masked Temporal-Frequency Modeling, a dual-domain pretraining objective that jointly reconstructs brain signals in both the time and frequency domains. Brain-OF is pretrained on a large-scale corpus comprising around 40 datasets and demonstrates superior performance across diverse downstream tasks, highlighting the benefits of joint multimodal integration and dual-domain pretraining.
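The dual-domain objective in the abstract, reconstructing masked signals in both the time and frequency domains, can be sketched as a combined loss. The equal 50/50 weighting and FFT-magnitude comparison here are assumptions for illustration, not the paper's exact formulation:

```python
import numpy as np

rng = np.random.default_rng(1)

def masked_tf_loss(signal, recon, mask):
    """Toy dual-domain reconstruction loss: MSE on masked time-domain
    samples plus MSE on FFT magnitudes of the full segment.
    The 0.5/0.5 weighting is an illustrative assumption."""
    t_loss = np.mean((signal[mask] - recon[mask]) ** 2)   # time-domain term
    f_true = np.abs(np.fft.rfft(signal))
    f_pred = np.abs(np.fft.rfft(recon))
    f_loss = np.mean((f_true - f_pred) ** 2)              # frequency-domain term
    return 0.5 * t_loss + 0.5 * f_loss

x = rng.standard_normal(256)                 # toy 1-D neural signal segment
mask = np.zeros(256, dtype=bool)
mask[64:128] = True                          # mask one contiguous patch
noisy = x + 0.1 * rng.standard_normal(256)   # stand-in for a model reconstruction
loss = masked_tf_loss(x, noisy, mask)
print(loss >= 0.0)  # True
```

A perfect reconstruction drives both terms to zero; the frequency term additionally penalizes reconstructions that match masked samples pointwise but distort the spectral content of the segment.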
Problem

Research questions and friction points this paper is trying to address.

multimodal brain imaging
foundation model
fMRI
EEG
MEG
Innovation

Methods, ideas, or system contributions that make the work stand out.

Omnifunctional Foundation Model
Multimodal Brain Imaging
Any-Resolution Neural Signal Sampler
Sparse Mixture of Experts
Masked Temporal-Frequency Modeling
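The Any-Resolution Neural Signal Sampler listed above aligns signals of heterogeneous resolutions before tokenization. A minimal sketch of the resolution-alignment idea, using plain linear interpolation onto a fixed token grid (the paper's sampler is learned, so this illustrates only the concept):

```python
import numpy as np

def resample_to_tokens(signal, n_tokens):
    """Hypothetical any-resolution sampler: linearly interpolate a 1-D
    signal of arbitrary length onto a fixed grid of n_tokens points,
    so segments recorded at different sampling rates yield sequences
    of equal length."""
    src = np.linspace(0.0, 1.0, num=len(signal))   # original sample positions
    dst = np.linspace(0.0, 1.0, num=n_tokens)      # common token positions
    return np.interp(dst, src, signal)

eeg = np.sin(np.linspace(0, 4 * np.pi, 1000))   # high-rate segment (e.g. EEG)
fmri = np.sin(np.linspace(0, 4 * np.pi, 40))    # low-rate segment (e.g. fMRI)
print(resample_to_tokens(eeg, 128).shape, resample_to_tokens(fmri, 128).shape)
# both resampled segments have shape (128,)
```

After this alignment, segments from fMRI, EEG, and MEG can share one tokenizer and one backbone despite their very different native sampling rates.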
Hanning Guo
INM-4, Forschungszentrum Jülich, Germany; Department of Computer Science, RWTH Aachen University, Germany
Farah Abdellatif
INM-4, Forschungszentrum Jülich, Germany; Department of Computer Science, RWTH Aachen University, Germany
Hanwen Bi
INM-7, Forschungszentrum Jülich, Germany; Institute of Systems Neuroscience, Heinrich Heine University, Germany
Andrei Galbenus
INM-4, Forschungszentrum Jülich, Germany; Department of Computer Science, RWTH Aachen University, Germany
N. Jon Shah
INM-4, Forschungszentrum Jülich, Germany; Department of Neurology, RWTH Aachen University, Germany; JARA-BRAIN-Translational Medicine, Germany; INM-11, JARA, Forschungszentrum Jülich, Germany
Abigail Morrison
Jülich Research Center
Computational Neuroscience; HPC; Neuroinformatics; Simulation Technology
Jürgen Dammers
INM-4, Forschungszentrum Jülich, Germany; Department of Psychiatry, Psychotherapy and Psychosomatics, RWTH Aachen University, Germany