🤖 AI Summary
Existing fMRI-based visual decoding methods rely heavily on manually defined regions of interest (ROIs), making them noise-sensitive, poorly generalizable across subjects, and impractical in low-data regimes or for novice users. To address these limitations, we propose the Trainable ROI (TROI) framework, which introduces a data-driven, two-stage voxel selection mechanism: (i) sparse mask learning coupled with low-pass filtering for efficient and robust ROI localization; and (ii) a learning-rate rewinding strategy that adapts to new subjects by fine-tuning only the input layer, eliminating the need for full model retraining. Under few-shot settings, TROI significantly outperforms the annotation-dependent MindEye2, achieving consistent improvements in voxel selection accuracy, reconstructed image fidelity, and visual retrieval precision. By decoupling ROI definition from manual annotation and enabling lightweight per-subject adaptation, TROI offers a scalable, generalizable, and user-friendly approach to fMRI decoding.
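The first stage described above (a trainable sparse mask smoothed by low-pass filtering) can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the dimensions, the L1 sparsity weight, the moving-average smoothing kernel, and the random stand-in data are all assumptions.

```python
# Hypothetical sketch of TROI stage 1: learn a per-voxel mask with an L1
# sparsity penalty, smoothing the mask logits with a low-pass (moving
# average) filter so selected voxels form spatially coherent regions.
import torch
import torch.nn.functional as F

n_voxels, embed_dim = 1000, 64  # illustrative sizes, not the paper's

mask_logits = torch.zeros(n_voxels, requires_grad=True)  # trainable voxel mask
input_layer = torch.nn.Linear(n_voxels, embed_dim)       # subject input layer
opt = torch.optim.Adam([mask_logits, *input_layer.parameters()], lr=1e-3)

def low_pass(logits, kernel_size=11):
    # 1-D moving-average smoothing along the voxel axis
    k = torch.ones(1, 1, kernel_size) / kernel_size
    return F.conv1d(logits.view(1, 1, -1), k, padding=kernel_size // 2).view(-1)

for step in range(50):
    fmri = torch.randn(8, n_voxels)     # stand-in fMRI batch
    target = torch.randn(8, embed_dim)  # stand-in decoding target
    mask = torch.sigmoid(low_pass(mask_logits))
    pred = input_layer(fmri * mask)
    # task loss plus L1 penalty driving the mask toward sparsity
    loss = F.mse_loss(pred, target) + 1e-3 * mask.sum()
    opt.zero_grad(); loss.backward(); opt.step()

# threshold the smoothed mask to get the final voxel selection,
# which also fixes the input-layer dimension for stage 2
selected = torch.sigmoid(low_pass(mask_logits)) > 0.5
```

The low-pass filter here acts as a spatial prior: without it, an L1-penalized mask tends to select isolated noisy voxels rather than contiguous regions.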
📝 Abstract
fMRI (functional Magnetic Resonance Imaging) visual decoding aims to recover the original image from brain signals elicited by visual stimuli. This typically relies on manually labeled ROIs (Regions of Interest) to select brain voxels. However, these ROIs can contain redundant information and noise, reducing decoding performance, and the lack of automated ROI labeling methods hinders the practical application of fMRI visual decoding, especially for new subjects. This work presents TROI (Trainable Region of Interest), a novel two-stage, data-driven ROI labeling method for cross-subject fMRI decoding tasks, particularly when samples from a new subject are limited. TROI uses the labeled ROIs already available in a cross-subject dataset to pretrain an image decoding backbone, so that only the input layer needs to be optimized for a new subject instead of retraining the entire model from scratch. In the first stage, we introduce a voxel selection method that combines sparse mask training with low-pass filtering to quickly generate the voxel mask and determine the input layer dimensions. In the second stage, we apply a learning rate rewinding strategy to fine-tune the input layer for the downstream task. On brain visual retrieval and reconstruction tasks, using the same small-sample dataset as the baseline, our voxel selection method surpasses the state-of-the-art MindEye2, which relies on an annotated ROI mask.
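The second stage (fine-tuning only the input layer with a rewound learning-rate schedule while the pretrained backbone stays frozen) might look roughly like this. Everything here is an illustrative assumption: the backbone architecture, the initial learning rate, and the cosine schedule are stand-ins, not the paper's actual configuration.

```python
# Hypothetical sketch of TROI stage 2: rebuild the input layer for the
# selected voxels, rewind the learning rate to its initial schedule, and
# fine-tune only that layer against a frozen cross-subject backbone.
import torch
import torch.nn.functional as F

n_selected, embed_dim = 300, 64  # n_selected comes from the stage-1 mask

backbone = torch.nn.Sequential(
    torch.nn.Linear(embed_dim, embed_dim), torch.nn.GELU()
)
for p in backbone.parameters():
    p.requires_grad = False  # pretrained backbone stays frozen

input_layer = torch.nn.Linear(n_selected, embed_dim)  # fresh subject layer
initial_lr = 1e-3  # "rewound" to the schedule's starting value
opt = torch.optim.Adam(input_layer.parameters(), lr=initial_lr)
sched = torch.optim.lr_scheduler.CosineAnnealingLR(opt, T_max=100)

for step in range(100):
    fmri = torch.randn(8, n_selected)   # stand-in masked fMRI batch
    target = torch.randn(8, embed_dim)  # stand-in decoding target
    loss = F.mse_loss(backbone(input_layer(fmri)), target)
    opt.zero_grad(); loss.backward(); opt.step(); sched.step()
```

Because gradients flow only into the small input layer, adapting to a new subject costs a fraction of full retraining, which is what makes the few-shot setting tractable.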