🤖 AI Summary
To address the challenge of poor adaptation between fixed-modality pretraining and downstream segmentation tasks in multi-protocol MRI—where contrast combinations vary across protocols—this paper proposes the first variable-modality Vision Transformer (ViT) framework capable of processing arbitrary subsets of input contrasts. Departing from rigid modality constraints, our framework introduces dynamic modality embeddings, contrast-agnostic masked modeling, and a multi-task decoder to enable flexible self-supervised pretraining and 3D voxel-wise segmentation fine-tuning. Evaluated on real-world heterogeneous MR data, it demonstrates significantly improved generalizability and cross-protocol knowledge transfer. On ischemic stroke and brain tumor segmentation benchmarks, it achieves Dice scores of 0.624 and 0.883, respectively—substantially outperforming both CNN-based and standard ViT baselines.
📝 Abstract
Self-supervised pretraining techniques are widely used to improve downstream task performance. However, real-world magnetic resonance (MR) studies usually comprise different sets of contrasts due to differing acquisition protocols. This poses a challenge for current deep learning methods in both large-scale pretraining and downstream tasks with varying input requirements, since these methods typically require a fixed set of input modalities, or contrasts. To address this challenge, we propose the variable-input ViT (VIViT), a transformer-based framework for self-supervised pretraining and segmentation fine-tuning with a variable set of contrasts per study. This ability allows our approach to maximize data availability during pretraining and to transfer the learned knowledge to downstream tasks despite differing input requirements. We validate our method on brain infarct and brain tumor segmentation, where it outperforms current CNN- and ViT-based models with mean Dice scores of 0.624 and 0.883, respectively. These results highlight the efficacy of our design in achieving better adaptability and performance on tasks with real-world heterogeneous MR data.
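The core idea of handling an arbitrary subset of contrasts per study can be sketched as follows. This is a minimal illustrative example, not the paper's actual implementation: the contrast names, embedding dimension, and the strategy of adding a per-contrast modality embedding to patch tokens before concatenation are all assumptions made for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each MR contrast has its own learned modality embedding
# (here random vectors stand in for learned parameters).
EMBED_DIM = 8
MODALITIES = ["T1", "T2", "FLAIR", "DWI"]
modality_embeddings = {m: rng.normal(size=EMBED_DIM) for m in MODALITIES}

def tokenize_study(study):
    """Turn a study with an arbitrary subset of contrasts into one token sequence.

    `study` maps a contrast name to its (num_patches, EMBED_DIM) patch tokens.
    Each contrast's tokens receive that contrast's modality embedding, so a
    transformer can tell which tokens came from which contrast regardless of
    how many contrasts the study happens to contain.
    """
    sequences = [tokens + modality_embeddings[m] for m, tokens in study.items()]
    return np.concatenate(sequences, axis=0)

# Two studies with different contrast subsets both yield valid token sequences
# of varying length, which a transformer encoder can consume directly.
study_a = {"T1": rng.normal(size=(4, EMBED_DIM)),
           "FLAIR": rng.normal(size=(4, EMBED_DIM))}
study_b = {"DWI": rng.normal(size=(4, EMBED_DIM))}

print(tokenize_study(study_a).shape)  # (8, 8)
print(tokenize_study(study_b).shape)  # (4, 8)
```

Because the sequence length simply grows with the number of available contrasts, no fixed input channel count is baked into the model, which is what allows pretraining and fine-tuning on protocols with different contrast combinations.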