🤖 AI Summary
Multimodal large language models (MLLMs) for medical ultrasound suffer from performance limitations due to the scarcity of high-quality, domain-specific image-text-video data.
Method: We propose the first reasoning-oriented quadruple (image–question–reasoning chain–answer) auto-construction paradigm tailored for ultrasound diagnosis, overcoming the bottleneck that unstructured sources (e.g., PDFs, clinical images) cannot be directly used for MLLM training. Our approach integrates domain-adaptive PDF/image parsing, knowledge distillation, and chain-of-thought-guided data synthesis.
Contribution/Results: We release ReMUD—the first large-scale ultrasound VQA/QA hybrid dataset (45K+ samples)—and the open-source ReMUD-7B model, a supervised fine-tuned variant of Qwen2.5-VL-7B-Instruct. Experiments demonstrate that ReMUD-7B significantly outperforms general-purpose MLLMs on ultrasound diagnostic tasks. All data, code, and model weights are publicly released, establishing a new paradigm for vertical-domain multimodal AI deployment.
📝 Abstract
Multimodal large language models (MLLMs) have shown great potential in general domains but perform poorly in some specific domains due to a lack of domain-specific data, such as image-text or video-text data. In many such domains, abundant graphic and textual data exists but is scattered and lacks standardized organization. The medical ultrasound field, for example, has ultrasound diagnostic books, clinical guidelines, diagnostic reports, and more. However, these materials are often stored as PDFs, images, and other formats that cannot be directly used for MLLM training. This paper proposes a novel image-text reasoning supervised fine-tuning data generation pipeline that creates domain-specific quadruplets (image, question, thinking trace, and answer) from such materials. We build ReMUD, a medical ultrasound dataset containing over 45,000 reasoning and non-reasoning supervised fine-tuning Question Answering (QA) and Visual Question Answering (VQA) samples. The ReMUD-7B model, fine-tuned from Qwen2.5-VL-7B-Instruct, outperforms general-domain MLLMs in the medical ultrasound field. To facilitate research, the ReMUD dataset, data generation codebase, and ReMUD-7B parameters will be released at https://github.com/ShiDaizi/ReMUD, addressing the data shortage issue in domain-specific MLLMs.
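The quadruplet format described in the abstract can be pictured as a simple JSON-serializable record. This is a minimal sketch, assuming a flat schema; the field names, helper function, and sample content below are hypothetical, and the released ReMUD dataset may use a different layout.

```python
import json

def make_quadruplet(image_path, question, thinking, answer):
    """Bundle one SFT sample as a dict: (image, question, thinking trace, answer).

    Hypothetical schema for illustration; image_path may be None for
    text-only QA samples, as ReMUD mixes QA and VQA data.
    """
    return {
        "image": image_path,   # path to the ultrasound image, or None
        "question": question,
        "thinking": thinking,  # chain-of-thought reasoning trace
        "answer": answer,
    }

# Hypothetical example record (content invented for illustration only)
sample = make_quadruplet(
    image_path="ultrasound_0001.png",
    question="What abnormality is visible in this thyroid ultrasound?",
    thinking="The image shows a hypoechoic nodule with irregular margins...",
    answer="A suspicious hypoechoic thyroid nodule.",
)
print(json.dumps(sample, indent=2))
```

Storing one such record per line (JSON Lines) is a common choice for supervised fine-tuning corpora, since it streams easily into standard data-loading tools.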