Interpretable Few-Shot Retinal Disease Diagnosis with Concept-Guided Prompting of Vision-Language Models

📅 2025-03-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the limited interpretability, heavy reliance on large-scale labeled data, and poor generalizability to novel diseases in existing fundus image classification models, this paper proposes a concept-guided vision-language prompting framework. The method distills clinically grounded concepts from a GPT-based knowledge base and injects them into a multimodal model, establishing a medical-knowledge-driven vision-language alignment mechanism that enables concept-level interpretability of diagnostic decisions. The approach integrates concept-guided prompt learning, few-shot/zero-shot transfer, and multimodal alignment modeling. On two fundus datasets, it achieves a 5.8% improvement in mean average precision (mAP) under 16-shot classification and a 2.7% gain in zero-shot novel disease detection mAP, substantially enhancing model generalizability and clinical trustworthiness. The core innovation is a knowledge-distillation-driven concept injection paradigm that unifies high accuracy, strong interpretability, and open-world adaptability.

📝 Abstract
Recent advancements in deep learning have shown significant potential for classifying retinal diseases using color fundus images. However, existing works predominantly rely exclusively on image data, lack interpretability in their diagnostic decisions, and treat medical professionals primarily as annotators for ground truth labeling. To fill this gap, we implement two key strategies: extracting interpretable concepts of retinal diseases using the knowledge base of GPT models and incorporating these concepts as a language component in prompt-learning to train vision-language (VL) models with both fundus images and their associated concepts. Our method not only improves retinal disease classification but also enriches few-shot and zero-shot detection (novel disease detection), while offering the added benefit of concept-based model interpretability. Our extensive evaluation across two diverse retinal fundus image datasets illustrates substantial performance gains in VL-model based few-shot methodologies through our concept integration approach, demonstrating an average improvement of approximately 5.8% and 2.7% mean average precision for 16-shot learning and zero-shot (novel class) detection respectively. Our method marks a pivotal step towards interpretable and efficient retinal disease recognition for real-world clinical applications.
Problem

Research questions and friction points this paper is trying to address.

Existing fundus image classifiers lack interpretability in their diagnostic decisions.
Heavy reliance on large-scale labeled data limits few-shot performance.
Image-only training generalizes poorly to novel (unseen) retinal diseases.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Extracts retinal disease concepts using GPT models
Integrates concepts into vision-language model training
Enhances few-shot and zero-shot disease detection
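The concept-integration idea above can be illustrated with a minimal sketch: score an image against a set of per-class concept prompts (the kind a GPT knowledge base might supply, e.g. "drusen deposits") and classify by the average concept similarity. This is an illustrative toy with random stand-in embeddings, not the paper's implementation; a real system would use a CLIP-style image/text encoder and learned prompts.

```python
import numpy as np

def cosine_sim(vec, mat):
    """Cosine similarity between one vector and each row of a matrix."""
    vec = vec / np.linalg.norm(vec)
    mat = mat / np.linalg.norm(mat, axis=1, keepdims=True)
    return mat @ vec

def classify_by_concepts(image_emb, concept_embs_per_class):
    """Score each class as the mean similarity between the image embedding
    and that class's concept-prompt embeddings. The per-concept similarities
    double as an interpretability trace (which concepts drove the decision)."""
    return {
        cls: float(cosine_sim(image_emb, embs).mean())
        for cls, embs in concept_embs_per_class.items()
    }

# Toy example: random embeddings stand in for encoder outputs.
rng = np.random.default_rng(0)
dim = 8
concepts = {
    "diabetic_retinopathy": rng.normal(size=(3, dim)),          # 3 concept prompts
    "age_related_macular_degeneration": rng.normal(size=(2, dim)),
}
image = rng.normal(size=dim)
scores = classify_by_concepts(image, concepts)
predicted = max(scores, key=scores.get)
print(predicted, scores)
```

Averaging concept-level similarities, rather than matching a single class-name prompt, is what exposes the concept scores as an explanation of the prediction.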
Deval Mehta
Founding Member & Research Fellow at AIM for Health Lab | Monash University
Multi-modal AI for Healthcare; Foundation Models / LLMs; Health Equity and Responsible AI
Yiwen Jiang
AIM for Health Lab, Faculty of IT, Monash University, Melbourne, Australia; Faculty of Engineering, Monash University, Melbourne, Australia
C. Jan
Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Victoria, Australia
Mingguang He
The Hong Kong Polytechnic University
Ophthalmology
Kshitij Jadhav
IIT Bombay
AIML in Healthcare
Zongyuan Ge
AIM for Health Lab, Faculty of IT, Monash University, Melbourne, Australia; Faculty of Engineering, Monash University, Melbourne, Australia; Airdoc-Monash Research Lab, Monash University, Melbourne, Australia