🤖 AI Summary
To address the scarcity of annotated medical imaging data and poor model generalizability in classifying focal liver lesions (FLLs), this paper proposes Liver-VLM, a dedicated vision-language model. Methodologically, it innovatively embeds fine-grained lesion category semantics into the text encoder, enabling semantic guidance at zero inference overhead. Leveraging a lightweight ResNet-18 visual backbone and a customized text encoder, Liver-VLM jointly optimizes image–text embedding alignment via cosine similarity and cross-entropy loss within the CLIP framework, thereby enhancing few-shot cross-modal matching accuracy. Evaluated on the MPCT-FLLs dataset, Liver-VLM significantly outperforms both CLIP and MedCLIP—particularly under few-shot settings—achieving notable gains in classification accuracy and AUC. This work establishes an efficient, deployable paradigm for low-resource hepatic imaging diagnosis.
📝 Abstract
Accurate classification of focal liver lesions is crucial for diagnosis and treatment in hepatology. However, traditional supervised deep learning models depend on large-scale annotated datasets, which are often limited in medical imaging. Recently, Vision-Language Models (VLMs) such as the Contrastive Language-Image Pre-training (CLIP) model have been applied to image classification. Compared to a conventional convolutional neural network (CNN), which classifies images based on visual information alone, a VLM leverages multimodal learning with text and images, allowing it to learn effectively even with a limited amount of labeled data. Inspired by CLIP, we propose Liver-VLM, a model specifically designed for focal liver lesion (FLL) classification. First, Liver-VLM incorporates class information into the text encoder without introducing additional inference overhead. Second, by computing the pairwise cosine similarities between image and text embeddings and optimizing the model with a cross-entropy loss, Liver-VLM effectively aligns image features with class-level text features. Experimental results on the MPCT-FLLs dataset demonstrate that Liver-VLM outperforms both the standard CLIP and MedCLIP models in terms of accuracy and Area Under the Curve (AUC). Further analysis shows that using a lightweight ResNet-18 backbone enhances classification performance, particularly under data-constrained conditions.
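The alignment objective described above — matching image embeddings to class-level text embeddings via cosine similarity and training with cross-entropy — can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the toy encoder, embedding dimension, temperature value, and class count are all assumptions, and the real model would use a ResNet-18 image backbone and a text encoder producing one embedding per lesion-class prompt.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyImageEncoder(nn.Module):
    """Stand-in for the ResNet-18 backbone; maps images to a shared embedding space."""
    def __init__(self, dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(), nn.Linear(3 * 8 * 8, dim))

    def forward(self, x):
        return self.net(x)

def clip_style_logits(img_emb, txt_emb, temperature=0.07):
    # L2-normalize both sides so the dot product equals cosine similarity,
    # then scale by a temperature as in CLIP-style training.
    img_emb = F.normalize(img_emb, dim=-1)
    txt_emb = F.normalize(txt_emb, dim=-1)
    return img_emb @ txt_emb.t() / temperature

# Toy batch: 4 images and 5 lesion classes, each class represented by a
# precomputed text embedding (hypothetical shapes for illustration).
images = torch.randn(4, 3, 8, 8)
text_embeddings = torch.randn(5, 64)      # one embedding per class prompt
labels = torch.tensor([0, 2, 1, 4])       # ground-truth lesion classes

encoder = ToyImageEncoder()
logits = clip_style_logits(encoder(images), text_embeddings)  # shape (4, 5)
loss = F.cross_entropy(logits, labels)    # class-level image–text alignment loss
```

At inference, the same similarity matrix serves directly as classification scores: the predicted lesion class for each image is the text embedding with the highest cosine similarity, which is why the class-conditioned prompts add no extra inference overhead.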