MIMO: A medical vision language model with visual referring multimodal input and pixel grounding multimodal output

📅 2025-10-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current medical vision-language models support only textual instruction input and plain-text output, limiting their capacity to model visual cues in medical images and localize critical anatomical regions. To address this, we propose the first unified multimodal architecture supporting visual referring inputs (e.g., bounding boxes or scribbles) and terminology-level pixel-grounded outputs—namely, segmentation masks—enabling bidirectional image–text alignment. Our method introduces a vision–language joint encoder–decoder framework, integrating self-supervised pretraining, instruction tuning, and a prompt-guided segmentation head. Evaluated across multiple medical multimodal benchmarks, our approach significantly outperforms state-of-the-art methods in visual understanding, cross-modal alignment, and fine-grained anatomical localization. It demonstrates strong generalization across diverse imaging modalities and clinical tasks. This work establishes a novel paradigm for clinically interpretable AI, bridging high-level semantic reasoning with precise, grounded visual perception.

📝 Abstract
Currently, medical vision-language models are widely used for medical visual question answering. However, existing models face two issues: on the input side, the model relies only on text instructions and lacks direct understanding of visual cues in the image; on the output side, the model gives only text answers and lacks any connection to key regions in the image. To address these issues, we propose MIMO, a unified medical vision-language model with visual referring Multimodal Input and pixel grounding Multimodal Output. MIMO can not only combine visual cues and textual instructions to understand complex medical images and semantics, but also ground the medical terminology in its textual output within the image. To overcome the scarcity of relevant data in the medical field, we propose MIMOSeg, a comprehensive medical multimodal dataset comprising 895K samples. MIMOSeg is constructed from four different perspectives, covering basic instruction following and complex question answering with multimodal input and multimodal output. We conduct experiments on several downstream medical multimodal tasks. Extensive experimental results verify that MIMO uniquely combines visual referring and pixel grounding capabilities, which are not available in previous models.
Problem

Research questions and friction points this paper is trying to address.

Addresses the lack of visual-cue understanding for medical image inputs
Connects text-based outputs to key image regions via pixel grounding
Overcomes medical multimodal data scarcity with a comprehensive new dataset
Innovation

Methods, ideas, or system contributions that make the work stand out.

Visual referring multimodal input for medical images
Pixel grounding multimodal output for terminology localization
MIMOSeg dataset with 895K samples for training
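The paper does not publish implementation details at this level, but the core idea in these bullets can be illustrated: a visual referring prompt (e.g., a bounding box) selects a region, and a prompt-guided head turns per-pixel features plus that prompt into a segmentation mask. The sketch below, assuming a simple cosine-similarity-and-threshold head in numpy, is illustrative only; all function names and the scoring logic are my assumptions, not the model's actual architecture:

```python
import numpy as np

def box_prompt_prior(box, size):
    """Rasterize a bounding-box visual prompt (x0, y0, x1, y1) into a
    binary prior map of the given (H, W) size."""
    prior = np.zeros(size, dtype=np.float32)
    x0, y0, x1, y1 = box
    prior[y0:y1, x0:x1] = 1.0
    return prior

def prompt_guided_mask(feature_map, prompt_prior, threshold=0.5):
    """Toy prompt-guided segmentation head: score each pixel by cosine
    similarity to the mean feature inside the prompted region, then
    threshold the scores into a binary mask."""
    h, w, c = feature_map.shape
    inside = prompt_prior > 0
    query = feature_map[inside].mean(axis=0)      # mean feature of prompted region
    query = query / (np.linalg.norm(query) + 1e-8)
    feats = feature_map.reshape(-1, c)
    feats = feats / (np.linalg.norm(feats, axis=1, keepdims=True) + 1e-8)
    scores = feats @ query                        # per-pixel cosine similarity
    return (scores.reshape(h, w) >= threshold).astype(np.uint8)
```

For example, given a feature map in which an anatomical structure shares a common feature direction, a box drawn anywhere inside that structure recovers the full structure's mask, not just the boxed pixels; a real model would learn such features and a trainable mask decoder rather than use this fixed rule.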
👥 Authors
Yanyuan Chen, School of Software & Microelectronics, Peking University
Dexuan Xu, School of Computer Science, Peking University
Yu Huang, National Engineering Research Center for Software Engineering, Peking University
Songkun Zhan, School of Software & Microelectronics, Peking University
Hanpin Wang, School of Computer Science, Peking University
Dongxue Chen, Peking University Sixth Hospital
Xueping Wang, Hunan Normal University
Meikang Qiu, Augusta University
Hang Li, Peking University First Hospital