🤖 AI Summary
To address poor generalization in few-shot whole-slide image (WSI) classification—caused by extreme image scale and sparse annotations—this paper proposes a multi-granularity vision-text prompt learning framework. Methodologically: (1) a multi-granularity attention mechanism is designed to model hierarchical interactions between learnable prompts and both individual image patches and patch groups; (2) an unbalanced optimal transport constraint is introduced to explicitly align visual embeddings with medical text embeddings, enhancing robustness to data augmentation; (3) building upon Prov-GigaPath, the framework integrates a biomedical text encoder, contrastive learning, and learnable prompt embeddings. Evaluated on lung, kidney, and breast WSI datasets, the method significantly outperforms CLIP, PLIP, and Prov-GigaPath-PLIP baselines, demonstrating superior generalization. The code and pretrained models are publicly available.
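The multi-granularity attention in point (1) can be illustrated with a minimal numpy sketch: learnable prompts attend once over individual patch embeddings (fine granularity) and once over average-pooled patch groups (coarse granularity), and the two contexts are fused. The pooling scheme, fusion rule, and all names here are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def multi_granular_attention(prompts, patches, group_size=4):
    """Attend learnable prompts over individual patches (fine) and
    average-pooled patch groups (coarse), then fuse both contexts.
    Simplified sketch; the paper's fusion may differ."""
    d = prompts.shape[1]
    # Fine granularity: prompts attend to every patch token.
    fine = softmax(prompts @ patches.T / np.sqrt(d)) @ patches
    # Coarse granularity: pool consecutive patches into groups.
    n = (patches.shape[0] // group_size) * group_size
    groups = patches[:n].reshape(-1, group_size, d).mean(axis=1)
    coarse = softmax(prompts @ groups.T / np.sqrt(d)) @ groups
    # Fuse the two granularities (simple average here).
    return 0.5 * (fine + coarse)

rng = np.random.default_rng(0)
out = multi_granular_attention(rng.normal(size=(8, 32)),   # 8 prompts, dim 32
                               rng.normal(size=(64, 32)))  # 64 patch embeddings
print(out.shape)  # (8, 32): one fused context vector per prompt
```

In a real model the prompts would be trainable parameters and the patch embeddings would come from the frozen Prov-GigaPath encoder; here both are random stand-ins.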
📝 Abstract
Whole slide pathology image classification presents challenges due to gigapixel image sizes and limited annotation labels, hindering model generalization. This paper introduces a prompt learning method to adapt large vision-language models for few-shot pathology classification. We first extend the Prov-GigaPath vision foundation model, pre-trained on 1.3 billion pathology image tiles, into a vision-language model by adding adaptors and aligning it with medical text encoders via contrastive learning on 923K image-text pairs. The model is then used to extract visual features and text embeddings from few-shot annotations and is fine-tuned with learnable prompt embeddings. Unlike prior methods that combine prompts with frozen features using prefix embeddings or self-attention, we propose multi-granular attention that models interactions between learnable prompts and both individual image patches and groups of patches. This approach improves the model's ability to capture both fine-grained details and broader context, enhancing its recognition of complex patterns across sub-regions. To further improve accuracy, we use an (unbalanced) optimal transport-based visual-text distance that strengthens model robustness against perturbations introduced during data augmentation. Empirical experiments on lung, kidney, and breast pathology modalities validate the effectiveness of our approach: it surpasses several recent competitors and consistently improves performance across diverse architectures, including CLIP, PLIP, and PLIP integrated with Prov-GigaPath. We release our implementations and pre-trained models at MGPATH.
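The unbalanced optimal transport distance mentioned above can be sketched with a generic entropic Sinkhorn iteration in which the marginal constraints are relaxed via a KL penalty (strength `tau`), so mass need not be fully conserved. The cost matrix, hyperparameters, and function name below are assumptions for illustration; the paper's exact formulation may differ.

```python
import numpy as np

def unbalanced_sinkhorn(Xv, Xt, eps=0.1, tau=1.0, n_iter=200):
    """Unbalanced entropic OT distance between visual embeddings Xv
    and text embeddings Xt. Cost = cosine distance; the marginal
    constraints are relaxed with a KL penalty of strength tau.
    Illustrative sketch, not the paper's exact loss."""
    Xv = Xv / np.linalg.norm(Xv, axis=1, keepdims=True)
    Xt = Xt / np.linalg.norm(Xt, axis=1, keepdims=True)
    C = 1.0 - Xv @ Xt.T                      # cosine-distance cost matrix
    a = np.full(Xv.shape[0], 1.0 / Xv.shape[0])   # uniform visual marginal
    b = np.full(Xt.shape[0], 1.0 / Xt.shape[0])   # uniform text marginal
    K = np.exp(-C / eps)                     # Gibbs kernel
    u, v = np.ones_like(a), np.ones_like(b)
    f = tau / (tau + eps)                    # KL-relaxation exponent
    for _ in range(n_iter):                  # Sinkhorn scaling updates
        u = (a / (K @ v)) ** f
        v = (b / (K.T @ u)) ** f
    T = u[:, None] * K * v[None, :]          # (relaxed) transport plan
    return float((T * C).sum())              # alignment distance

rng = np.random.default_rng(1)
dist = unbalanced_sinkhorn(rng.normal(size=(16, 32)),  # 16 visual embeddings
                           rng.normal(size=(4, 32)))   # 4 text embeddings
print(dist >= 0.0)  # True: plan and cost are both non-negative
```

In training, this distance would be minimized between augmented patch embeddings and class-prompt text embeddings, which is how it mitigates augmentation perturbations: the relaxed marginals let the plan down-weight outlier patches instead of forcing exact mass matching.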