Bridging the Gap in Ophthalmic AI: MM-Retinal-Reason Dataset and OphthaReason Model toward Dynamic Multimodal Reasoning

📅 2025-08-22
🤖 AI Summary
Current ophthalmic multimodal models are limited to shallow reasoning tasks, such as visual feature matching, and struggle to integrate heterogeneous clinical data (e.g., chief complaints, medical history) for the deep diagnostic reasoning required in real-world practice. To address this, the authors introduce MM-Retinal-Reason, the first comprehensive ophthalmic multimodal dataset spanning the full spectrum from perception to reasoning, and propose OphthaReason, a clinically grounded dynamic reasoning model. Its core innovation is Uncertainty-Aware Dynamic Thinking (UADT), which estimates sample-level uncertainty via entropy and adaptively modulates reasoning depth through a shaped advantage mechanism. OphthaReason combines a multimodal large language model backbone with reinforcement learning, uncertainty modeling, and step-by-step reasoning trace generation. Evaluated across multiple benchmarks, it outperforms general-purpose, medical, RL-based medical, and ophthalmology-specific MLLMs by at least 24.92%, 15.00%, 21.20%, and 17.66%, respectively.

📝 Abstract
Multimodal large language models (MLLMs) have recently demonstrated remarkable reasoning abilities under the reinforcement learning paradigm. Although several multimodal reasoning models have been explored in the medical domain, most of them focus exclusively on basic reasoning, which refers to shallow inference based on visual feature matching. However, real-world clinical diagnosis extends beyond basic reasoning, demanding reasoning processes that integrate heterogeneous clinical information (such as chief complaints and medical history) with multimodal medical imaging data. To bridge this gap, we introduce MM-Retinal-Reason, the first ophthalmic multimodal dataset covering the full spectrum of perception and reasoning. It encompasses both basic reasoning tasks and complex reasoning tasks, aiming to enhance visual-centric fundamental reasoning capabilities and emulate realistic clinical thinking patterns. Building upon MM-Retinal-Reason, we propose OphthaReason, the first ophthalmology-specific multimodal reasoning model with step-by-step reasoning traces. To enable flexible adaptation to both basic and complex reasoning tasks, we specifically design a novel method called Uncertainty-Aware Dynamic Thinking (UADT), which estimates sample-level uncertainty via entropy and dynamically modulates the model's exploration depth using a shaped advantage mechanism. Comprehensive experiments demonstrate that our model achieves state-of-the-art performance on both basic and complex reasoning tasks, outperforming general-purpose MLLMs, medical MLLMs, RL-based medical MLLMs, and ophthalmic MLLMs by at least 24.92%, 15.00%, 21.20%, and 17.66%, respectively. Project Page: https://github.com/lxirich/OphthaReason
Problem

Research questions and friction points this paper is trying to address.

Bridging the gap between basic and complex clinical reasoning in ophthalmology AI
Integrating heterogeneous clinical information with multimodal medical imaging data
Enhancing visual-centric reasoning capabilities to emulate realistic clinical thinking patterns
Innovation

Methods, ideas, or system contributions that make the work stand out.

OphthaReason, an ophthalmology-specific multimodal reasoning model with step-by-step reasoning traces
Uncertainty-Aware Dynamic Thinking (UADT): entropy-based estimation of sample-level uncertainty
Dynamic modulation of exploration depth via a shaped advantage mechanism
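The UADT idea above can be sketched in miniature: estimate a sample's uncertainty as the mean Shannon entropy of its generated-token distributions, then scale the RL advantage so that high-uncertainty (harder) samples receive a stronger exploration signal. The paper's exact estimator and shaping function are not reproduced here; the linear scaling and the `alpha` knob below are illustrative assumptions, not the authors' formulation.

```python
import math

def token_entropy(probs):
    """Shannon entropy of one token's probability distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def sample_uncertainty(token_prob_dists):
    """Sample-level uncertainty: mean entropy over the generated tokens."""
    entropies = [token_entropy(p) for p in token_prob_dists]
    return sum(entropies) / len(entropies)

def shaped_advantage(advantage, uncertainty, max_entropy, alpha=1.0):
    """Hypothetical advantage shaping: amplify the advantage in
    proportion to normalized uncertainty, so uncertain samples drive
    deeper exploration. The paper's shaping function may differ."""
    u = min(uncertainty / max_entropy, 1.0)  # normalize to [0, 1]
    return advantage * (1.0 + alpha * u)
```

A confident sample (near-zero entropy) keeps its advantage unchanged, while a maximally uncertain one has it doubled at the default `alpha=1.0`, which is one simple way to realize "dynamic thinking depth" inside a standard RL update.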
👥 Authors
Ruiqi Wu
School of Computer Science and Engineering, Southeast University, Nanjing, China
Yuang Yao
School of Computer Science and Engineering, Southeast University, Nanjing, China
Tengfei Ma
Stony Brook University
Chenran Zhang
School of Computer Science and Engineering, Southeast University, Nanjing, China
Na Su
Department of Ophthalmology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, China
Tao Zhou
School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, China
Geng Chen
School of Computer Science, Northwestern Polytechnical University, Xi’an, China
Wen Fan
University of California, Berkeley
Yi Zhou
School of Computer Science and Engineering, Southeast University, Nanjing, China