🤖 AI Summary
General-purpose vision-language models (VLMs) exhibit poor domain adaptation, severe hallucination, and weak medical terminology comprehension when applied to low-dose radiotherapy (LDRT) medical image understanding. Method: We introduce LDRT-VQA—the first multilingual visual question answering benchmark tailored to radiation oncology—and propose an LLM-as-a-judge paradigm for quantitative hallucination evaluation. We further design a joint fine-tuning framework integrating cross-modal projection with domain-knowledge alignment. Built upon the LLaVA architecture, our model combines ResNet-101 and Llama-2, employing multi-stage instruction tuning and contrastive alignment trained on 42,673 multilingual radiology publications. Contribution/Results: On LDRT-VQA, our method achieves an 18.7% absolute accuracy gain over baselines, reduces hallucination rate by 41.3%, and attains 89.5% F1-score on domain-specific terminology—significantly enhancing the reliability and clinical trustworthiness of image-based reasoning.
📝 Abstract
Large language models (LLMs) have demonstrated immense capabilities in understanding textual data and are increasingly being adopted to help researchers accelerate scientific discovery through knowledge extraction (information retrieval), knowledge distillation (summarizing key findings and methodologies into concise forms), and knowledge synthesis (aggregating information from multiple scientific sources to address complex queries, generate hypothesis and formulate experimental plans). However, scientific data often exists in both visual and textual modalities. Vision language models (VLMs) address this by incorporating a pretrained vision backbone for processing images and a cross-modal projector that adapts image tokens into the LLM dimensional space, thereby providing richer multimodal comprehension. Nevertheless, off-the-shelf VLMs show limited capabilities in handling domain-specific data and are prone to hallucinations. We developed intelligent assistants finetuned from LLaVA models to enhance multimodal understanding in low-dose radiation therapy (LDRT)-a benign approach used in the treatment of cancer-related illnesses. Using multilingual data from 42,673 articles, we devise complex reasoning and detailed description tasks for visual question answering (VQA) benchmarks. Our assistants, trained on 50,882 image-text pairs, demonstrate superior performance over base models as evaluated using LLM-as-a-judge approach, particularly in reducing hallucination and improving domain-specific comprehension.