Raptor: Scalable Train-Free Embeddings for 3D Medical Volumes Leveraging Pretrained 2D Foundation Models

πŸ“… 2025-07-10
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
To address the high computational cost of training 3D volumetric foundation models for medical imaging (e.g., MRI) and their reliance on large-scale annotated data, this paper introduces Raptor, a fully training-free, scalable embedding method. Raptor freezes a 2D vision foundation model pretrained on natural images, extracts visual tokens from individual cross-sectional slices, and applies random projections that compress the spatial dimensions while aggregating semantic information across slices. Crucially, it constructs high-fidelity 3D volumetric semantic embeddings without introducing any trainable parameters or requiring medical imaging training data, which the authors present as the first such approach. Evaluated on ten downstream medical tasks, Raptor consistently outperforms existing state-of-the-art methods, with average performance gains of 3–14%, while also improving inference efficiency, cross-domain generalization, and ease of clinical deployment.

πŸ“ Abstract
Current challenges in developing foundational models for volumetric imaging data, such as magnetic resonance imaging (MRI), stem from the computational complexity of training state-of-the-art architectures in high dimensions and curating sufficiently large datasets of volumes. To address these challenges, we introduce Raptor (Random Planar Tensor Reduction), a train-free method for generating semantically rich embeddings for volumetric data. Raptor leverages a frozen 2D foundation model, pretrained on natural images, to extract visual tokens from individual cross-sections of medical volumes. These tokens are then spatially compressed using random projections, significantly reducing computational complexity while retaining semantic information. Extensive experiments on ten diverse medical volume tasks verify the superior performance of Raptor over state-of-the-art methods, including those pretrained exclusively on medical volumes (+3% SuPreM, +6% MISFM, +10% Merlin, +13% VoCo, and +14% SLIViT), while entirely bypassing the need for costly training. Our results highlight the effectiveness and versatility of Raptor as a foundation for advancing deep learning-based methods for medical volumes.
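The abstract's pipeline (slice a volume, encode each cross-section with a frozen 2D model, compress spatially with a random projection) can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: `frozen_2d_encoder` is a hypothetical stand-in for a real pretrained model such as DINOv2, and the patch size, dimensions, and aggregation details are assumptions.

```python
import numpy as np

def frozen_2d_encoder(slice_2d, token_dim=64):
    # Hypothetical stub for a frozen, natural-image-pretrained 2D
    # foundation model: maps a 2D slice to a grid of visual tokens.
    h, w = slice_2d.shape
    patches = slice_2d.reshape(h // 16, 16, w // 16, 16).mean(axis=(1, 3))
    rng = np.random.default_rng(0)
    proj = rng.standard_normal((1, token_dim))
    return patches.reshape(-1, 1) @ proj  # (num_patches, token_dim)

def raptor_embed(volume, embed_dim=256, seed=0):
    """Train-free 3D embedding sketch: encode slices with a frozen 2D
    model, then compress the spatial axes with a random projection."""
    rng = np.random.default_rng(seed)
    tokens = np.stack([frozen_2d_encoder(volume[z])
                       for z in range(volume.shape[0])])
    s, p, d = tokens.shape                    # (slices, patches, token_dim)
    # Random projection over the flattened spatial axes (slices x patches);
    # no parameter is ever trained.
    R = rng.standard_normal((s * p, embed_dim)) / np.sqrt(embed_dim)
    return tokens.reshape(s * p, d).T @ R     # (token_dim, embed_dim)

vol = np.random.default_rng(1).random((32, 64, 64))  # toy MRI-like volume
emb = raptor_embed(vol)
print(emb.shape)  # (64, 256)
```

The fixed-size output is what makes the embedding usable as a drop-in feature for downstream tasks regardless of the input volume's resolution.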
Problem

Research questions and friction points this paper is trying to address.

Develop train-free embeddings for 3D medical volumes
Leverage pretrained 2D models to avoid costly training
Reduce computational complexity while retaining semantic information
Innovation

Methods, ideas, or system contributions that make the work stand out.

Train-free embeddings for 3D medical volumes
Leverages frozen 2D foundation models
Uses random projections for spatial compression
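The random-projection step rests on the Johnson-Lindenstrauss property: a scaled Gaussian projection approximately preserves pairwise distances even under aggressive dimensionality reduction. A minimal NumPy demonstration (all dimensions here are arbitrary illustration values, unrelated to the paper's settings):

```python
import numpy as np

rng = np.random.default_rng(42)
n, d, k = 100, 10_000, 512                    # points, original dim, projected dim
X = rng.standard_normal((n, d))
R = rng.standard_normal((d, k)) / np.sqrt(k)  # scaled Gaussian projection
Y = X @ R                                     # compressed representation

# Pairwise distance of two points before and after projection:
d_orig = np.linalg.norm(X[0] - X[1])
d_proj = np.linalg.norm(Y[0] - Y[1])
ratio = d_proj / d_orig
print(round(ratio, 2))  # close to 1.0: distances are roughly preserved
```

Because the projection matrix is drawn once and never trained, the compression adds no learnable parameters, which is what keeps the overall method training-free.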
πŸ”Ž Similar Papers
No similar papers found.