🤖 AI Summary
Existing fMRI-based visual decoding methods rely on subject-specific training, which limits their generalizability and scalability. To address this, we propose VoxelFormer, a lightweight cross-subject decoding framework. It first compresses high-dimensional voxel sequences with a Token Merging Transformer (ToMer); a query-driven Q-Former then generates fixed-size neural representations explicitly aligned to the CLIP image embedding space, enabling efficient and semantically consistent mapping from fMRI signals to visual reconstructions (see the sketches below). With significantly fewer parameters than state-of-the-art (SOTA) methods, VoxelFormer achieves competitive image retrieval performance on the 7T Natural Scenes Dataset. Crucially, it is the first method to demonstrate effective, scalable multi-subject visual reconstruction without substantial parameter overhead, validating both the feasibility and practicality of generalizable fMRI-to-image decoding.
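The summary does not spell out the merging rule, so here is a minimal PyTorch sketch of the voxel-compression step, assuming ToMer follows ToMe-style bipartite soft matching (Bolya et al.): tokens are split into two alternating sets, the `r` most similar cross-set pairs are averaged, and the sequence shrinks by `r`. The function name, the merge count `r`, and all dimensions are illustrative, not taken from the paper.

```python
import torch
import torch.nn.functional as F

def merge_tokens(x: torch.Tensor, r: int) -> torch.Tensor:
    """Reduce (batch, tokens, dim) to (batch, tokens - r, dim) by merging
    the r most similar alternating-token pairs (bipartite soft matching)."""
    a, b = x[:, 0::2], x[:, 1::2]                    # split into two token sets
    sim = F.normalize(a, dim=-1) @ F.normalize(b, dim=-1).transpose(1, 2)
    score, match = sim.max(dim=-1)                   # best partner in b for each a-token
    order = score.argsort(dim=-1, descending=True)   # most similar pairs merge first
    merged_idx, kept_idx = order[:, :r], order[:, r:]
    b = b.clone()
    for i in range(x.size(0)):                       # plain loop for clarity
        tgt = match[i, merged_idx[i]]
        # naive average; collisions (two a-tokens sharing a target) are ignored here
        b[i, tgt] = (b[i, tgt] + a[i, merged_idx[i]]) / 2
    d = x.size(-1)
    a_kept = a.gather(1, kept_idx.unsqueeze(-1).expand(-1, -1, d))
    return torch.cat([a_kept, b], dim=1)

# e.g. 4096 voxel tokens -> 3072 after merging 1024 pairs:
# tokens = merge_tokens(torch.randn(2, 4096, 512), r=1024)
```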
📄 Abstract
Recent advances in fMRI-based visual decoding have enabled compelling reconstructions of perceived images. However, most approaches rely on subject-specific training, limiting scalability and practical deployment. We introduce **VoxelFormer**, a lightweight transformer architecture that enables multi-subject training for visual decoding from fMRI. VoxelFormer integrates a Token Merging Transformer (ToMer) for efficient voxel compression and a query-driven Q-Former that produces fixed-size neural representations aligned with the CLIP image embedding space. Evaluated on the 7T Natural Scenes Dataset, VoxelFormer achieves competitive retrieval performance on subjects included during training with significantly fewer parameters than existing methods. These results highlight token merging and query-based transformers as promising strategies for parameter-efficient neural decoding.
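To make the alignment stage concrete, below is a minimal sketch of the query-driven component, assuming a standard BLIP-2-style cross-attention Q-Former: a fixed set of learned queries attends over the merged voxel tokens, so the output size is independent of the subject-varying input length, and a linear head maps it into the CLIP image-embedding dimension. The class name, query count, and dimensions are hypothetical placeholders.

```python
import torch
import torch.nn as nn

class QueryAligner(nn.Module):
    """Learned queries cross-attend to merged voxel tokens and are
    projected into the CLIP image-embedding space."""
    def __init__(self, n_queries: int = 32, d_model: int = 512, clip_dim: int = 768):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(n_queries, d_model) * 0.02)
        self.cross_attn = nn.MultiheadAttention(d_model, num_heads=8, batch_first=True)
        self.proj = nn.Linear(d_model, clip_dim)

    def forward(self, voxel_tokens: torch.Tensor) -> torch.Tensor:
        # voxel_tokens: (batch, n_tokens, d_model); n_tokens may vary by subject,
        # but the output is always (batch, n_queries, clip_dim)
        q = self.queries.unsqueeze(0).expand(voxel_tokens.size(0), -1, -1)
        attended, _ = self.cross_attn(q, voxel_tokens, voxel_tokens)
        return self.proj(attended)

# Retrieval sketch: pool the queries and rank candidate images by cosine
# similarity against their CLIP embeddings (contrastive training assumed).
# neural = QueryAligner()(tokens).mean(dim=1)            # (batch, clip_dim)
# scores = torch.nn.functional.normalize(neural, dim=-1) @ \
#          torch.nn.functional.normalize(clip_img_embs, dim=-1).T
```

Because the query count is fixed, adding subjects changes only the input token stream, not the decoder's parameter count, which is consistent with the paper's parameter-efficiency claim.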