🤖 AI Summary
Clinical deployment of self-supervised medical foundation models is hindered by ground-truth scarcity and unreliable out-of-distribution (OOD) generalization. Method: We systematically evaluate Swin UNETR, SimMIM, iBOT, and SMIT for lung cancer CT segmentation, assessing their uncertainty quantification and OOD robustness. We propose a lightweight, unsupervised uncertainty metric based on entropy and volumetric occupancy, enabling model-reliability ranking and OOD performance evaluation without ground-truth annotations. Contribution/Results: Cross-domain validation across the LRAD, 5Rater, and pulmonary embolism CT datasets shows that SMIT achieves the highest F1-score on the lung cancer test set (0.64), the lowest uncertainty entropy (0.12), and the smallest false-positive tumor volume on OOD pulmonary embolism data (5.67 cc), significantly outperforming the alternatives. This work establishes an interpretable, reusable evaluation paradigm for trustworthy clinical deployment of self-supervised models.
📝 Abstract
Medical image foundation models have shown the ability to segment organs and tumors with minimal fine-tuning. These models are typically evaluated on task-specific in-distribution (ID) datasets. However, reliable performance on ID datasets does not guarantee robust generalization to out-of-distribution (OOD) datasets. Importantly, once deployed for clinical use, it is impractical to obtain "ground truth" delineations to assess ongoing performance drift, especially when images fall into the OOD category due to differing imaging protocols. Hence, we introduce a comprehensive set of computationally fast metrics to evaluate the performance of multiple foundation models (Swin UNETR, SimMIM, iBOT, SMIT) trained with self-supervised learning (SSL). All models were fine-tuned on identical datasets for lung tumor segmentation from computed tomography (CT) scans. Evaluation was performed on two public lung cancer datasets (LRAD: n = 140, 5Rater: n = 21) with different image acquisitions and tumor stages than the training data (n = 317, from a public resource of stage III-IV lung cancers), and on a public non-cancer dataset of volumetric CT scans from patients with pulmonary embolism (n = 120). All models produced similarly accurate tumor segmentations on the lung cancer testing datasets. SMIT produced the highest F1-score (LRAD: 0.60, 5Rater: 0.64) and lowest entropy (LRAD: 0.06, 5Rater: 0.12), indicating a higher tumor detection rate and more confident segmentations. On the OOD dataset, SMIT falsely detected the fewest tumors, with a median volume occupancy of 5.67 cc versus 9.97 cc for the next-best method, SimMIM. Our analysis shows that additional metrics such as entropy and volume occupancy may help better characterize model performance on mixed-domain datasets.
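The two unsupervised metrics above can be computed directly from a model's softmax output, with no reference delineation. A minimal sketch of how they might look is shown below; the function names and the exact reduction (mean voxel-wise Shannon entropy, predicted volume converted to cubic centimetres) are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np


def prediction_entropy(probs, eps=1e-8):
    """Mean voxel-wise Shannon entropy of a softmax prediction.

    probs: array of shape (C, D, H, W) holding per-voxel class
    probabilities. Lower mean entropy suggests more confident
    segmentations (assumed reduction; the paper may aggregate differently).
    """
    p = np.clip(probs, eps, 1.0)
    voxel_entropy = -np.sum(p * np.log(p), axis=0)  # (D, H, W)
    return float(voxel_entropy.mean())


def volume_occupancy_cc(mask, voxel_spacing_mm):
    """Predicted tumor volume in cubic centimetres.

    mask: boolean array (D, H, W) of predicted tumor voxels.
    voxel_spacing_mm: (dz, dy, dx) spacing in millimetres.
    On a non-cancer (OOD) scan, any occupancy is false-positive volume.
    """
    voxel_vol_mm3 = float(np.prod(voxel_spacing_mm))
    return float(mask.sum()) * voxel_vol_mm3 / 1000.0  # mm^3 -> cc
```

Together these give the ranking signal used in the abstract: a model with low entropy on ID scans and low volume occupancy on non-cancer scans is both confident and restrained, without any ground-truth annotations.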