PRISM2: Unlocking Multi-Modal General Pathology AI with Clinical Dialogue

📅 2025-06-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current pathology foundation models lack whole-slide image (WSI) comprehension and are not trained on large-scale real-world clinical data, limiting their generalizability and clinical utility. This work introduces the first multimodal WSI foundation model designed explicitly for clinical diagnosis. The authors propose a two-stage, clinical-dialogue-driven training paradigm that leverages 2.3 million WSIs paired with authentic pathology reports to achieve robust vision–language alignment. The method combines contrastive learning with image-captioning joint pretraining, coupled with a phased optimization strategy that first freezes and then unfreezes the language model. The resulting model enables zero-shot yes/no classification without prompt engineering or explicit class enumeration. It significantly outperforms baselines, including PRISM and TITAN, on diagnostic classification and biomarker prediction tasks, and surpasses CLIP-based approaches in zero-shot yes/no classification.
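The zero-shot yes/no scheme described above can be sketched as follows: rather than enumerating class prompts and ranking them by image–text similarity (the CLIP-style approach), the language model is asked a diagnostic question and the decision reduces to comparing its next-token probabilities for "yes" versus "no". This is a minimal illustrative sketch under that assumption, not the paper's actual implementation; the logit vector and token indices here are toy placeholders.

```python
import numpy as np

def yes_no_zero_shot(logits: np.ndarray, yes_id: int, no_id: int) -> bool:
    """Decide a yes/no diagnostic question by comparing the LM's
    next-token probabilities for the "yes" and "no" tokens.
    `logits` stands in for the model's next-token logit vector."""
    probs = np.exp(logits - logits.max())  # stable softmax
    probs /= probs.sum()
    return bool(probs[yes_id] > probs[no_id])

# Toy 5-token vocabulary; index 0 = "yes", index 1 = "no".
logits = np.array([2.0, 0.5, -1.0, 0.0, 0.3])
print(yes_no_zero_shot(logits, yes_id=0, no_id=1))  # True
```

Because the question itself carries the class semantics, no prompt tuning or explicit list of candidate classes is needed at inference time.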

📝 Abstract
Recent pathology foundation models can provide rich tile-level representations but fall short of delivering general-purpose clinical utility without further extensive model development. These models lack whole-slide image (WSI) understanding and are not trained with large-scale diagnostic data, limiting their performance on diverse downstream tasks. We introduce PRISM2, a multi-modal slide-level foundation model trained via clinical dialogue to enable scalable, generalizable pathology AI. PRISM2 is trained on nearly 700,000 specimens (2.3 million WSIs) paired with real-world clinical diagnostic reports in a two-stage process. In Stage 1, a vision-language model is trained using contrastive and captioning objectives to align whole slide embeddings with textual clinical diagnosis. In Stage 2, the language model is unfrozen to enable diagnostic conversation and extract more clinically meaningful representations from hidden states. PRISM2 achieves strong performance on diagnostic and biomarker prediction tasks, outperforming prior slide-level models including PRISM and TITAN. It also introduces a zero-shot yes/no classification approach that surpasses CLIP-style methods without prompt tuning or class enumeration. By aligning visual features with clinical reasoning, PRISM2 improves generalization on both data-rich and low-sample tasks, offering a scalable path forward for building general pathology AI agents capable of assisting diagnostic and prognostic decisions.
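The Stage-1 objective described in the abstract pairs a contrastive (CLIP-style InfoNCE) term, which aligns slide embeddings with report embeddings, with a captioning cross-entropy term over the report tokens. The numpy sketch below shows the shape of such a joint loss; the embedding dimensions, loss weighting, and random inputs are assumptions for illustration only, not values from the paper.

```python
import numpy as np

def contrastive_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE: matched slide/report pairs sit on the diagonal."""
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature
    n = logits.shape[0]

    def xent(lg):
        lg = lg - lg.max(axis=1, keepdims=True)
        logp = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -logp[np.arange(n), np.arange(n)].mean()

    return 0.5 * (xent(logits) + xent(logits.T))

def caption_loss(token_logits, target_ids):
    """Cross-entropy of the captioning head over report tokens."""
    lg = token_logits - token_logits.max(axis=1, keepdims=True)
    logp = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
    return -logp[np.arange(len(target_ids)), target_ids].mean()

# Joint Stage-1 objective on toy random data (weight of 1.0 is assumed).
rng = np.random.default_rng(0)
img, txt = rng.normal(size=(4, 8)), rng.normal(size=(4, 8))
tok, tgt = rng.normal(size=(6, 10)), rng.integers(0, 10, size=6)
loss = contrastive_loss(img, txt) + 1.0 * caption_loss(tok, tgt)
print(float(loss) > 0)
```

In Stage 2 the language model would be unfrozen so that gradients from diagnostic dialogue also shape the slide representations, but that phase only changes which parameters are trainable, not the form of these loss terms.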
Problem

Research questions and friction points this paper is trying to address.

Enhances pathology AI with multi-modal clinical dialogue training
Improves WSI understanding using large-scale diagnostic data
Enables zero-shot classification without prompt tuning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-modal vision-language model for pathology
Two-stage training with clinical dialogue
Zero-shot classification without prompt tuning
George Shaikovski
Paige, NYC, NY United States
Eugene Vorontsov
Ecole Polytechnique de Montreal
Adam Casson
Paige, NYC, NY United States
Julian Viret
Paige, NYC, NY United States
Eric Zimmermann
Microsoft Research, Cambridge, MA United States
Neil Tenenholtz
Microsoft Research
Yi Kan Wang
Paige, NYC, NY United States
Jan H. Bernhard
Paige, NYC, NY United States
Ran A. Godrich
Paige, NYC, NY United States
Juan A. Retamero
Paige, NYC, NY United States
Razik Yousfi
Paige, NYC, NY United States
Nicolo Fusi
Microsoft Research, Cambridge, MA United States
Thomas J. Fuchs
Icahn School of Medicine at Mount Sinai
Kristen Severson
Microsoft Research, Cambridge, MA United States
Siqi Liu
Paige, NYC, NY United States

Machine Learning, Computational Pathology, Digital Pathology, Deep Learning, Artificial Intelligence