🤖 AI Summary
This work addresses the critical challenge of evaluating hallucinations and subtle errors in pathology vision-language models (VLMs), where the absence of reliable reference-free metrics hinders clinical deployment. To this end, we propose PathGLS, the first multidimensional, reference-free evaluation framework tailored to pathology VLMs, which quantifies model reliability along three dimensions: fine-grained grounding, logical consistency, and output stability. PathGLS integrates visual-textual alignment analysis, natural language inference (NLI)-based entailment consistency checking, and output-variance assessment under adversarial perturbations, enabling estimation of hallucination rates and robustness to domain shift without ground-truth labels. Experiments show that PathGLS exhibits a 40.2% score drop on hallucinated reports, far exceeding BERTScore's 2.1% drop, and correlates strongly (Spearman ρ = 0.71) with expert clinical error ratings, significantly surpassing existing LLM-based baselines.
📝 Abstract
Vision-Language Models (VLMs) offer significant potential in computational pathology by enabling interpretable image analysis, automated reporting, and scalable decision support. However, their widespread clinical adoption remains limited by the absence of reliable, automated evaluation metrics capable of identifying subtle failures such as hallucinations. To address this gap, we propose PathGLS, a novel reference-free evaluation framework that assesses pathology VLMs across three dimensions: Grounding (fine-grained visual-text alignment), Logic (entailment graph consistency using Natural Language Inference), and Stability (output variance under adversarial visual-semantic perturbations). PathGLS supports both patch-level and whole-slide image (WSI)-level analysis, yielding a comprehensive trust score. Experiments on the Quilt-1M, TCGA, REG2025, PathMMU, and TCGA-Sarcoma datasets demonstrate the superiority of PathGLS. Specifically, on Quilt-1M, PathGLS exhibits a steep 40.2% score drop for hallucinated reports, compared to only 2.1% for BERTScore, indicating far greater sensitivity to hallucination. Moreover, validation against expert-defined clinical error hierarchies shows that PathGLS achieves a strong Spearman's rank correlation of $\rho = 0.71$ ($p < 0.0001$), significantly outperforming Large Language Model (LLM)-based approaches (Gemini 3.0 Pro: $\rho = 0.39$, $p < 0.0001$). These results establish PathGLS as a robust reference-free metric. By directly quantifying hallucination rates and domain-shift robustness, it serves as a reliable criterion for benchmarking VLMs on private clinical datasets and informing safe deployment. Code can be found at: https://github.com/My13ad/PathGLS
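To make the three-dimensional scoring concrete, the following is a minimal sketch of how Grounding, Logic, and Stability signals could be aggregated into a single trust score. All function names, the equal-weight averaging, and the input conventions (per-patch alignment similarities in [0, 1], per-sentence-pair NLI labels, and per-perturbation output scores) are illustrative assumptions, not the paper's actual implementation.

```python
from statistics import fmean, pstdev

def grounding_score(alignment_sims):
    # Hypothetical: mean patch-level visual-text alignment similarity,
    # each similarity assumed to lie in [0, 1].
    return fmean(alignment_sims)

def logic_score(nli_labels):
    # Hypothetical: fraction of sentence-pair NLI judgments in the
    # report's entailment graph that are not contradictions.
    return sum(1 for lab in nli_labels if lab != "contradiction") / len(nli_labels)

def stability_score(perturbed_scores):
    # Hypothetical: 1 minus (scaled) population std-dev of output scores
    # under perturbations; identical outputs -> stability of 1.0.
    # (pstdev of values in [0, 1] is at most 0.5, hence the factor 2.)
    return 1.0 - 2.0 * pstdev(perturbed_scores)

def pathgls_trust(alignment_sims, nli_labels, perturbed_scores,
                  weights=(1 / 3, 1 / 3, 1 / 3)):
    # Weighted combination of the three dimension scores; equal weights
    # are an assumption for illustration only.
    g = grounding_score(alignment_sims)
    l = logic_score(nli_labels)
    s = stability_score(perturbed_scores)
    wg, wl, ws = weights
    return wg * g + wl * l + ws * s
```

Under this sketch, a report with strong grounding (mean alignment 0.8), one contradictory sentence pair out of four, and perfectly stable outputs would score `(0.8 + 0.75 + 1.0) / 3 = 0.85`.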