🤖 AI Summary
This study addresses the underutilization of medical knowledge in cross-modal chest X-ray (CXR) classification. We propose a set-theoretic knowledge injection framework that explicitly models anatomical structures, pathological features, and clinical relationships to generate fine-grained medical descriptions with controllable granularity for targeted CLIP fine-tuning. Our method integrates domain-specific large language models with zero-shot learning, achieving 72.5% zero-shot classification accuracy on CheXpert, substantially outperforming a human-annotated text baseline (49.9%). Key contributions include: (1) the first set-theory-driven, interpretable knowledge injection paradigm for medical vision-language modeling; (2) empirical validation that fine-grained, high-density medical knowledge critically enhances cross-modal diagnostic performance; and (3) a scalable, tunable knowledge-augmentation pathway toward clinically deployable image understanding.
📝 Abstract
The integration of artificial intelligence into medical imaging has shown tremendous potential, yet the relationship between pre-trained knowledge and cross-modality learning performance remains unclear. This study investigates how explicitly injecting medical knowledge into the learning process affects cross-modality classification performance, focusing on chest X-ray (CXR) images. We introduce a novel set-theory-based knowledge injection framework that generates captions for CXR images with controllable knowledge granularity. Using this framework, we fine-tune the CLIP model on captions with varying levels of medical information. We evaluate the model through zero-shot classification on the CheXpert dataset, a benchmark for CXR classification. Our results demonstrate that injecting fine-grained medical knowledge substantially improves classification accuracy, achieving 72.5% compared to 49.9% with human-generated captions, which highlights the crucial role of domain-specific knowledge in medical cross-modality learning. Furthermore, we explore the influence of knowledge density and of domain-specific large language models (LLMs) for caption generation, finding that denser knowledge and specialized LLMs both enhance performance. This research advances medical image analysis by demonstrating the effectiveness of knowledge injection for automated CXR classification, paving the way for more accurate and reliable diagnostic tools.
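To make the evaluation setup concrete, the sketch below illustrates CLIP-style zero-shot classification: an image embedding is compared against text embeddings of candidate captions, and the label of the most similar caption is returned. This is a minimal, assumption-based illustration, not the paper's implementation; the toy vectors and the `zero_shot_classify` helper stand in for the outputs of CLIP's image and text encoders and for captions produced by the knowledge injection framework.

```python
import numpy as np

def zero_shot_classify(image_emb, caption_embs, labels):
    """Pick the label whose caption embedding is most similar to the image.

    Toy stand-in for CLIP zero-shot classification (hypothetical helper,
    not from the paper): embeddings are L2-normalized, as CLIP does,
    then compared by cosine similarity.
    """
    img = image_emb / np.linalg.norm(image_emb)
    caps = caption_embs / np.linalg.norm(caption_embs, axis=1, keepdims=True)
    sims = caps @ img  # cosine similarity of the image to each caption
    return labels[int(np.argmax(sims))]

# Toy embeddings; in practice these would come from CLIP's encoders,
# with one caption per CheXpert class.
image_emb = np.array([0.9, 0.1, 0.0])
caption_embs = np.array([
    [1.0, 0.0, 0.0],  # e.g. a fine-grained caption describing cardiomegaly
    [0.0, 1.0, 0.0],  # e.g. a caption describing no acute findings
])
labels = ["Cardiomegaly", "No Finding"]
print(zero_shot_classify(image_emb, caption_embs, labels))  # → Cardiomegaly
```

The paper's comparison between caption variants amounts to running this same procedure with text embeddings derived from different caption sets (human-generated vs. knowledge-injected) and measuring accuracy on CheXpert.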