Multimodal Carotid Risk Stratification with Large Vision-Language Models: Benchmarking, Fine-Tuning, and Clinical Insights

📅 2025-10-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
Carotid atherosclerosis risk assessment faces several challenges: heterogeneous multimodal data (ultrasound images, clinical notes, laboratory results, and protein biomarkers) are difficult to fuse, models lack interpretability, and clinical transparency is low. To address these, we propose a question-answering-driven multimodal large language model framework, built upon LLaVA-NeXT-Vicuna and fine-tuned for the ultrasound domain via LoRA. Structured tabular data are encoded into semantically rich textual representations and fed jointly with ultrasound images for end-to-end stroke risk stratification. By combining domain adaptation with multimodal text enhancement, the framework improves both interpretability and clinical utility. Experiments demonstrate superior specificity and balanced accuracy compared to CNN baselines. This work establishes a novel, trustworthy paradigm for multimodal clinical decision support in cerebrovascular risk assessment.
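To make the tabular-to-text step concrete, here is a minimal sketch (ours, not the authors' code) of how structured clinical fields might be serialized into a prompt that accompanies the ultrasound image; the field names, template wording, and the `<image>` placeholder convention are illustrative assumptions.

```python
# Hypothetical serialization of structured clinical data into prompt text.
# Field names and phrasing are illustrative, not the paper's exact template.

def tabular_to_text(record: dict) -> str:
    """Render a patient's structured fields as a readable clinical summary."""
    parts = [
        f"Age: {record['age']} years",
        f"Sex: {record['sex']}",
        f"Total cholesterol: {record['cholesterol_mg_dl']} mg/dL",
        f"Protein biomarker level (assumed name): {record['biomarker']:.2f}",
    ]
    return "Patient profile: " + "; ".join(parts) + "."

# The serialized profile is concatenated with the image token and the
# risk question to form a single multimodal prompt.
prompt = (
    "<image>\n"
    + tabular_to_text({"age": 67, "sex": "male",
                       "cholesterol_mg_dl": 243, "biomarker": 1.84})
    + "\nBased on the carotid ultrasound image and the patient profile, "
      "classify the stroke risk as low or high."
)
```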

📝 Abstract
Reliable risk assessment for carotid atheromatous disease remains a major clinical challenge, as it requires integrating diverse clinical and imaging information in a manner that is transparent and interpretable to clinicians. This study investigates the potential of recent state-of-the-art large vision-language models (LVLMs) for multimodal carotid plaque assessment by integrating ultrasound imaging (USI) with structured clinical, demographic, laboratory, and protein biomarker data. A framework that simulates realistic diagnostic scenarios through interview-style question sequences is proposed, comparing a range of open-source LVLMs, including both general-purpose and medically tuned models. Zero-shot experiments reveal that, despite their power, not all LVLMs can accurately identify the imaging modality and anatomy, and all of them perform poorly at risk classification. To address this limitation, LLaVA-NeXT-Vicuna is adapted to the ultrasound domain using low-rank adaptation (LoRA), resulting in substantial improvements in stroke risk stratification. Integrating multimodal tabular data as text further enhances specificity and balanced accuracy, yielding competitive performance compared to prior convolutional neural network (CNN) baselines trained on the same dataset. Our findings highlight both the promise and the limitations of LVLMs in ultrasound-based cardiovascular risk prediction, underscoring the importance of multimodal integration, model calibration, and domain adaptation for clinical translation.
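As a concrete illustration of the LoRA step, the following is a minimal sketch assuming the Hugging Face transformers and peft libraries; the checkpoint name, target modules, and hyperparameters are our assumptions, not the paper's reported configuration.

```python
# Minimal LoRA adaptation sketch for a LLaVA-NeXT-Vicuna checkpoint.
# Rank, alpha, and target modules are illustrative choices.
import torch
from transformers import LlavaNextForConditionalGeneration
from peft import LoraConfig, get_peft_model

model = LlavaNextForConditionalGeneration.from_pretrained(
    "llava-hf/llava-v1.6-vicuna-7b-hf",  # assumed public checkpoint
    torch_dtype=torch.float16,
)

lora_cfg = LoraConfig(
    r=16,                                 # low-rank update dimension
    lora_alpha=32,                        # scaling factor for the update
    target_modules=["q_proj", "v_proj"],  # attention projections in the LM
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()  # only adapter weights are trainable
```

Training then proceeds with a standard causal language modeling objective over the image-plus-text prompts, with the base model weights frozen.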
Problem

Research questions and friction points this paper is trying to address.

Integrating multimodal clinical data for carotid disease risk assessment
Improving stroke risk stratification using adapted vision-language models
Benchmarking model performance on ultrasound imaging and clinical data (see the interview-style sketch after this list)
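To show what an interview-style question sequence might look like in the zero-shot benchmark, here is a hedged sketch; the staged questions and the `ask` wrapper are hypothetical, standing in for whichever LVLM is under evaluation.

```python
# Hypothetical interview-style benchmark: questions are posed in the order a
# clinician might ask them, each answer feeding context for the next turn.
INTERVIEW = [
    "What imaging modality was used to acquire this image?",
    "Which anatomical structure is shown?",
    "Given the plaque appearance and patient profile, is stroke risk low or high?",
]

def run_interview(ask, image):
    """Run the staged QA sequence.

    `ask(image, prompt) -> str` is an assumed wrapper around the LVLM
    being benchmarked; it returns the model's free-text answer.
    """
    context, answers = "", []
    for question in INTERVIEW:
        reply = ask(image, context + "Q: " + question + "\nA:")
        answers.append(reply)
        context += f"Q: {question}\nA: {reply}\n"
    return answers
```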
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fine-tuned LLaVA model using LoRA adaptation
Integrated ultrasound imaging with clinical data
Enhanced stroke risk stratification via multimodal fusion