🤖 AI Summary
This work addresses the challenge of modeling long-range spatiotemporal dynamics in high-dimensional 4D fMRI signals, where GPU memory limits render existing voxel-level approaches impractical for long time series. The authors propose the first method to tokenize 3D fMRI volumes into compact, continuous tokens using a pre-trained 2D natural-image autoencoder; the resulting tokens are processed by a lightweight Transformer encoder, enabling efficient long-sequence modeling. This tokenization substantially reduces computational and memory overhead, and a self-supervised masked token modeling objective further improves performance on downstream tasks. Evaluated on large-scale datasets including UK Biobank, the Human Connectome Project (HCP), and ADHD-200, the method achieves significantly lower memory consumption and higher efficiency than state-of-the-art voxel-level models under identical input conditions.
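To make the tokenization step concrete, below is a minimal sketch (not the authors' code) of encoding a 3D fMRI volume into continuous tokens with a frozen pre-trained 2D autoencoder. The Stable Diffusion VAE checkpoint, the slice-wise axial encoding, the channel replication, and all shapes are illustrative assumptions; the paper only specifies "a pre-trained 2D natural image autoencoder".

```python
import torch
from diffusers import AutoencoderKL

# Frozen 2D natural-image autoencoder. The specific SD VAE checkpoint is an
# assumed stand-in for whichever autoencoder the paper actually uses.
vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse")
vae.eval().requires_grad_(False)  # the autoencoder stays frozen

@torch.no_grad()
def tokenize_volume(volume: torch.Tensor) -> torch.Tensor:
    """Encode one (D, H, W) fMRI volume into a compact set of continuous tokens."""
    # Treat the volume as D axial slices; replicate the single channel to match
    # the RGB input the natural-image autoencoder expects (assumption). Real
    # inputs would also need normalization to the autoencoder's expected range.
    slices = volume.unsqueeze(1).repeat(1, 3, 1, 1)   # (D, 3, H, W)
    latents = vae.encode(slices).latent_dist.mean     # (D, C, h, w), 8x downsampled
    tokens = latents.flatten(2).transpose(1, 2)       # (D, h*w, C) token grid per slice
    return tokens.reshape(-1, tokens.shape[-1])       # one token set for the whole volume

volume = torch.rand(64, 96, 96)    # toy volume: 64 axial slices of 96x96
tokens = tokenize_volume(volume)   # -> (64 * 12 * 12, 4) with the SD VAE
```

Compressing each volume to a small latent grid before the Transformer is what keeps the sequence length, and hence VRAM, manageable for long fMRI recordings.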
📝 Abstract
Modeling long-range spatiotemporal dynamics in functional Magnetic Resonance Imaging (fMRI) remains a key challenge due to the high dimensionality of the four-dimensional signals. Prior voxel-based models, despite their strong performance and interpretability, are constrained by prohibitive memory demands and can therefore capture only limited temporal windows. To address this, we propose TABLeT (Two-dimensionally Autoencoded Brain Latent Transformer), a novel approach that tokenizes fMRI volumes using a pre-trained 2D natural-image autoencoder. Each 3D fMRI volume is compressed into a compact set of continuous tokens, enabling long-sequence modeling with a simple Transformer encoder under a limited VRAM budget. Across large-scale benchmarks including the UK Biobank (UKB), Human Connectome Project (HCP), and ADHD-200 datasets, TABLeT outperforms existing models on multiple tasks while achieving substantial gains in computational and memory efficiency over the state-of-the-art voxel-based method given the same input. Furthermore, we develop a self-supervised masked token modeling approach to pre-train TABLeT, which improves its performance on various downstream tasks. Our findings suggest a promising direction for scalable and interpretable spatiotemporal modeling of brain activity. Our code is available at https://github.com/beotborry/TABLeT.
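For intuition about the pre-training stage, here is a hedged, MAE-style sketch of masked token modeling over continuous tokens: randomly mask a fraction of tokens, encode the sequence with a small Transformer, and regress the original token values at the masked positions. The mask ratio, MSE loss, model sizes, and the omission of positional encodings are all assumptions for illustration, not the paper's exact recipe.

```python
import torch
import torch.nn as nn

class MaskedTokenModel(nn.Module):
    """Toy masked-token pre-training model over continuous tokens."""
    def __init__(self, token_dim=4, d_model=256, n_layers=4, n_heads=8):
        super().__init__()
        self.embed = nn.Linear(token_dim, d_model)
        self.mask_token = nn.Parameter(torch.zeros(d_model))  # learnable [MASK]
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, token_dim)  # reconstruct continuous tokens

    def forward(self, tokens, mask_ratio=0.5):
        # tokens: (B, N, token_dim); positional encodings omitted for brevity.
        x = self.embed(tokens)
        mask = torch.rand(x.shape[:2], device=x.device) < mask_ratio
        x[mask] = self.mask_token                    # replace masked positions
        pred = self.head(self.encoder(x))            # (B, N, token_dim)
        return ((pred - tokens) ** 2)[mask].mean()   # MSE on masked tokens only

model = MaskedTokenModel()
tokens = torch.randn(2, 128, 4)   # toy batch: 2 sequences of 128 tokens
loss = model(tokens)
loss.backward()
```

Because the tokens are continuous rather than discrete codebook indices, a regression loss is a natural fit here; a discrete tokenizer would instead call for a cross-entropy objective.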