VIViT: Variable-Input Vision Transformer Framework for 3D MR Image Segmentation

📅 2025-05-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the challenge of poor adaptation between fixed-modality pretraining and downstream segmentation tasks in multi-protocol MRI—where contrast combinations vary across protocols—this paper proposes the first variable-modality Vision Transformer (ViT) framework capable of processing arbitrary subsets of input contrasts. Departing from rigid modality constraints, our framework introduces dynamic modality embeddings, contrast-agnostic masked modeling, and a multi-task decoder to enable flexible self-supervised pretraining and 3D voxel-wise segmentation fine-tuning. Evaluated on real-world heterogeneous MR data, it demonstrates significantly improved generalizability and cross-protocol knowledge transfer. On ischemic stroke and brain tumor segmentation benchmarks, it achieves Dice scores of 0.624 and 0.883, respectively—substantially outperforming both CNN-based and standard ViT baselines.
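The variable-input mechanism described above can be sketched roughly as follows. This is a minimal illustration of the idea, not the paper's implementation: the contrast names, dimensions, and the simple dictionary-based embedding lookup are all assumptions. Each contrast's patch tokens receive a learned per-modality embedding, so tokens from any subset of contrasts can be concatenated into one variable-length sequence for the transformer.

```python
import numpy as np

rng = np.random.default_rng(0)

EMBED_DIM = 32   # toy token dimension
N_PATCHES = 8    # toy number of 3D patches per volume

# One learned embedding vector per known MR contrast (names are assumed).
MODALITIES = ["T1", "T2", "FLAIR", "DWI"]
modality_embed = {m: rng.normal(size=EMBED_DIM) for m in MODALITIES}

def tokenize_study(volumes):
    """Turn an arbitrary subset of contrasts into one token sequence.

    volumes: dict mapping contrast name -> (N_PATCHES, EMBED_DIM) array of
    patch tokens (assumed already linearly projected). Each token gets the
    embedding of its source contrast added, so the transformer can tell
    contrasts apart while accepting any subset of them.
    """
    tokens = []
    for name, patches in volumes.items():
        tokens.append(patches + modality_embed[name])  # broadcast add
    return np.concatenate(tokens, axis=0)

# A study with only two of the four contrasts still yields a valid input.
study = {
    "T1": rng.normal(size=(N_PATCHES, EMBED_DIM)),
    "FLAIR": rng.normal(size=(N_PATCHES, EMBED_DIM)),
}
seq = tokenize_study(study)
print(seq.shape)  # (16, 32): 2 contrasts x 8 patches, variable-length
```

Because the sequence length simply grows with the number of contrasts present, no fixed input channel count is ever required; a real ViT would additionally add positional embeddings and learn the modality embeddings end to end.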

📝 Abstract
Self-supervised pretraining techniques have been widely used to improve downstream task performance. However, real-world magnetic resonance (MR) studies usually consist of different sets of contrasts due to differing acquisition protocols, which poses challenges for current deep learning methods in large-scale pretraining and in downstream tasks with varying input requirements, since these methods typically require a fixed set of input modalities, or contrasts. To address this challenge, we propose the variable-input ViT (VIViT), a transformer-based framework designed for self-supervised pretraining and segmentation fine-tuning with variable contrasts in each study. With this ability, our approach can maximize data availability during pretraining and transfer the learned knowledge to downstream tasks despite variations in input requirements. We validate our method on brain infarct and brain tumor segmentation, where it outperforms current CNN- and ViT-based models with mean Dice scores of 0.624 and 0.883, respectively. These results highlight the efficacy of our design for better adaptability and performance on tasks with real-world heterogeneous MR data.
Problem

Research questions and friction points this paper is trying to address.

Handles variable MR image contrasts for segmentation
Enables self-supervised pretraining with diverse input modalities
Improves adaptability for real-world heterogeneous MR data
Innovation

Methods, ideas, or system contributions that make the work stand out.

Variable-input ViT for diverse MR contrasts
Self-supervised pretraining maximizes data availability
Transformer adapts to varying input requirements
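The contrast-agnostic masked modeling mentioned in the summary can be sketched as below. This is an assumed simplification, not the paper's objective as published: tokens are masked uniformly at random across whichever contrasts a study happens to contain, so the pretraining signal never depends on a fixed modality set.

```python
import numpy as np

rng = np.random.default_rng(1)

def mask_tokens(tokens, mask_ratio=0.6):
    """Contrast-agnostic masking: hide a random fraction of tokens
    regardless of which contrast each token came from.

    Returns the visible tokens plus the visible and masked indices;
    a decoder would be trained to reconstruct the masked positions.
    """
    n = tokens.shape[0]
    n_mask = int(n * mask_ratio)
    perm = rng.permutation(n)
    masked_idx = np.sort(perm[:n_mask])
    visible_idx = np.sort(perm[n_mask:])
    return tokens[visible_idx], visible_idx, masked_idx

# Toy token sequence, e.g. patch tokens pooled from whichever contrasts
# are present in one study (16 tokens of dimension 32).
tokens = rng.normal(size=(16, 32))
visible, vis_idx, msk_idx = mask_tokens(tokens)
print(visible.shape, len(msk_idx))  # (7, 32) 9 at a 0.6 mask ratio
```

Since the masking operates on the flat token sequence, studies with two contrasts and studies with five are handled by exactly the same objective, which is what lets pretraining use every available study rather than only those with a complete contrast set.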
Badhan Kumar Das
Siemens Healthineers, Erlangen, Germany
Ajay Singh
Siemens Healthineers, Erlangen, Germany
Gengyan Zhao
Siemens Healthineers, Princeton, New Jersey, United States
Han Liu
Siemens Healthineers, Princeton, New Jersey, United States
Thomas J. Re
Siemens Healthineers, Princeton, New Jersey, United States
Dorin Comaniciu
Siemens Healthineers
Medical Image Analysis, Medical Image Computing, Image-Guided Interventions, Artificial Intelligence, Computer Vision
Eli Gibson
Siemens Healthineers
Biomedical Image Segmentation and Registration, Deep Learning, Histology, Prostate, Pancreas and Liver Cancer
Andreas Maier
Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany