🤖 AI Summary
To address the limited interpretability, heavy reliance on large-scale labeled data, and poor generalizability to novel diseases in existing fundus image classification models, this paper proposes a concept-guided vision-language prompting framework. Our method distills clinically grounded concepts from a GPT-based knowledge base and injects them into a multimodal model, establishing a medical-knowledge-driven vision-language alignment mechanism that enables concept-level interpretability of diagnostic decisions. The approach integrates concept-guided prompt learning, few-shot/zero-shot transfer, and multimodal alignment modeling. On two fundus datasets, it achieves a 5.8% improvement in mean average precision (mAP) under 16-shot classification and a 2.7% gain in zero-shot novel disease detection mAP, significantly enhancing model generalizability and clinical trustworthiness. The core innovation lies in a knowledge-distillation-driven concept injection paradigm that unifies high accuracy, strong interpretability, and open-world adaptability.
📝 Abstract
Recent advancements in deep learning have shown significant potential for classifying retinal diseases using color fundus images. However, existing works rely almost exclusively on image data, lack interpretability in their diagnostic decisions, and treat medical professionals primarily as annotators for ground-truth labeling. To address these limitations, we implement two key strategies: extracting interpretable concepts of retinal diseases from the knowledge base of GPT models, and incorporating these concepts as the language component in prompt learning to train vision-language (VL) models on both fundus images and their associated concepts. Our method not only improves retinal disease classification but also strengthens few-shot learning and zero-shot (novel disease) detection, while offering the added benefit of concept-based model interpretability. Our extensive evaluation across two diverse retinal fundus image datasets shows substantial performance gains for VL-model-based few-shot methodologies through our concept integration approach, with average improvements in mean average precision of approximately 5.8% for 16-shot learning and 2.7% for zero-shot (novel class) detection, respectively. Our method marks a pivotal step towards interpretable and efficient retinal disease recognition for real-world clinical applications.
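The concept-guided prompting idea described above can be sketched as follows: each disease is represented by a set of clinical concept strings, each concept is turned into a text prompt and embedded, and a class is scored by the mean cosine similarity between the image embedding and its concept-prompt embeddings. This is a minimal illustration under stated assumptions, not the paper's implementation: the `embed` function is a toy hash-based stand-in for a pretrained VL text encoder (e.g. CLIP-style), and the concept lists and prompt template are hypothetical examples of GPT-derived concepts.

```python
import math
from typing import Dict, List

# Hypothetical clinical concepts per disease (stand-ins for GPT-derived concepts).
DISEASE_CONCEPTS: Dict[str, List[str]] = {
    "diabetic retinopathy": ["microaneurysms", "hard exudates", "dot hemorrhages"],
    "glaucoma": ["enlarged cup-to-disc ratio", "optic nerve rim thinning"],
}

def embed(text: str, dim: int = 64) -> List[float]:
    """Toy deterministic text embedding (character-hash based, L2-normalized).
    A real system would call a pretrained VL text encoder here."""
    vec = [0.0] * dim
    for i, ch in enumerate(text.lower()):
        vec[(i * 31 + ord(ch)) % dim] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a: List[float], b: List[float]) -> float:
    """Cosine similarity of two unit vectors reduces to a dot product."""
    return sum(x * y for x, y in zip(a, b))

def concept_scores(image_embedding: List[float]) -> Dict[str, float]:
    """Score each disease as the mean similarity between the image embedding
    and that disease's concept-prompt embeddings."""
    scores: Dict[str, float] = {}
    for disease, concepts in DISEASE_CONCEPTS.items():
        prompts = [f"a fundus image showing {c}" for c in concepts]
        sims = [cosine(image_embedding, embed(p)) for p in prompts]
        scores[disease] = sum(sims) / len(sims)
    return scores

# In practice the image embedding comes from the VL image encoder; here we
# fake one by embedding a textual description of the image content.
fake_image = embed("fundus photograph with microaneurysms and hard exudates")
print(concept_scores(fake_image))
```

Because each class score decomposes into per-concept similarities, the same machinery also supports zero-shot transfer (add a concept list for an unseen disease) and concept-level explanations (report which concepts drove the prediction).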