🤖 AI Summary
Foundation models in medical image analysis lack a unified framework, hindering systematic understanding of architectural evolution, training paradigms, and clinical translation.
Method: We conduct the first structured taxonomy and cross-modal meta-analysis of vision-only and vision-language foundation models, synthesizing more than 120 studies to quantitatively characterize trends in multimodal fusion, few-shot adaptation, and clinical deployment. We propose novel pathways, including federated learning for privacy-preserving training, knowledge distillation for model compression, and prompt engineering for zero- and few-shot generalization, and systematically evaluate domain adaptation, efficient fine-tuning, interpretability, and prompting strategies for clinical applicability.
Contribution/Results: Foundation models significantly outperform conventional methods in zero- and few-shot settings; data usage increasingly favors multi-center, small-scale cohorts; and applications concentrate on lesion segmentation and diagnostic support. This work establishes a theoretical foundation and practical roadmap for standardizing and clinically deploying medical foundation models.
📝 Abstract
Recent advancements in artificial intelligence (AI), particularly foundation models (FMs), have revolutionized medical image analysis, demonstrating strong zero- and few-shot performance across diverse medical imaging tasks, from segmentation to report generation. Unlike traditional task-specific AI models, FMs leverage large corpora of labeled and unlabeled multimodal datasets to learn generalized representations that can be adapted to various downstream clinical applications with minimal fine-tuning. However, despite the rapid proliferation of FM research in medical imaging, the field remains fragmented, lacking a unified synthesis that systematically maps the evolution of architectures, training paradigms, and clinical applications across modalities. To address this gap, this review article provides a comprehensive and structured analysis of FMs in medical image analysis. We systematically categorize studies into vision-only and vision-language FMs based on their architectural foundations, training strategies, and downstream clinical tasks. Additionally, we conduct a quantitative meta-analysis of these studies to characterize temporal trends in dataset utilization and application domains. We also critically discuss persistent challenges, including domain adaptation, efficient fine-tuning, computational constraints, and interpretability, along with emerging solutions such as federated learning, knowledge distillation, and advanced prompting. Finally, we identify key future research directions aimed at enhancing the robustness, explainability, and clinical integration of FMs, thereby accelerating their translation into real-world medical practice.