🤖 AI Summary
To address the high computational cost and reliance on large-scale annotated data in training foundation models for 3D medical volumes (e.g., MRI), this paper introduces Raptor, a fully training-free, scalable embedding method. Raptor freezes a 2D vision foundation model pretrained on natural images, extracts slice-level visual tokens from cross-sections of each volume, and applies random projections to compress the spatial dimensions while aggregating semantic information across slices. Crucially, it constructs semantically rich 3D volumetric embeddings without introducing any trainable parameters or requiring medical imaging training data, which the paper presents as the first approach of its kind. Evaluated on ten downstream medical tasks, Raptor consistently outperforms existing state-of-the-art methods, with average performance gains of 3–14%, while also improving inference efficiency, cross-domain generalization, and suitability for clinical deployment.
📄 Abstract
Current challenges in developing foundation models for volumetric imaging data, such as magnetic resonance imaging (MRI), stem from the computational complexity of training state-of-the-art architectures in high dimensions and from curating sufficiently large datasets of volumes. To address these challenges, we introduce Raptor (Random Planar Tensor Reduction), a train-free method for generating semantically rich embeddings for volumetric data. Raptor leverages a frozen 2D foundation model, pretrained on natural images, to extract visual tokens from individual cross-sections of medical volumes. These tokens are then spatially compressed using random projections, significantly reducing computational complexity while retaining semantic information. Extensive experiments on ten diverse medical volume tasks verify the superior performance of Raptor over state-of-the-art methods, including those pretrained exclusively on medical volumes (+3% SuPreM, +6% MISFM, +10% Merlin, +13% VoCo, and +14% SLIViT), while entirely bypassing the need for costly training. Our results highlight the effectiveness and versatility of Raptor as a foundation for advancing deep learning-based methods for medical volumes.
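The core idea above — frozen 2D tokens per cross-section, compressed by random projection into a fixed-size, training-free volume embedding — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name, token shapes, and output dimension are all assumptions, and the frozen 2D encoder is stubbed out with random tokens.

```python
import numpy as np

rng = np.random.default_rng(0)

def raptor_style_embed(volume_tokens: np.ndarray, d_out: int = 256) -> np.ndarray:
    """Sketch of random-projection reduction over slice-level visual tokens.

    volume_tokens: (n_slices, n_patches, d_model) array of visual tokens,
    standing in for the output of a frozen 2D foundation model applied to
    each cross-section of a medical volume (shapes are illustrative).
    """
    n_slices, n_patches, d_model = volume_tokens.shape
    # Flatten the spatial axes (slices x patches) into one dimension.
    flat = volume_tokens.reshape(n_slices * n_patches, d_model)
    # A fixed Gaussian random projection compresses the spatial dimension
    # while approximately preserving geometry (Johnson-Lindenstrauss);
    # it has no trainable parameters.
    proj = rng.standard_normal((d_out, n_slices * n_patches)) / np.sqrt(d_out)
    # Result: a fixed-size (d_out, d_model) embedding of the whole volume.
    return proj @ flat

# Stub: random "tokens" for a 64-slice volume, 196 patches per slice,
# 768-dim features (typical ViT-base sizes, assumed for illustration).
emb = raptor_style_embed(rng.standard_normal((64, 196, 768)))
```

The embedding can then be fed to a lightweight downstream head (e.g., a linear probe) for each medical task, which is where the only task-specific fitting would occur.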