🤖 AI Summary
Current vision-language models (VLMs) suffer from weak logical reasoning and insufficient self-correction capabilities in medical report generation, leading to diagnostic inconsistencies and uncorrected errors. To address these limitations, we propose a perception-guided complex reasoning and dynamic reflection framework. Our approach uniquely integrates structured medical knowledge injection, perception-tree constrained modeling, and a multi-step verifiable reflection mechanism—jointly ensuring diagnostic logical consistency and enabling real-time error detection and correction during generation. The method builds upon a fine-tuned VLM architecture, incorporating four core modules: knowledge injection, perception enhancement, tree-structured reasoning guidance, and self-validating reflection. Evaluated on IU-Xray and MIMIC-CXR, our framework reduces report logical error rate by 32.7% and improves diagnostic accuracy by 11.4%, while significantly outperforming baselines across standard NLG metrics and clinically grounded evaluation criteria.
📝 Abstract
Large vision-language models (LVMs) hold a great promise for automating medical report generation, potentially reducing the burden of manual reporting. State-of-the-art (SOTA) research fine-tunes general LVMs with medical data to align radiology images to corresponding medical reports. However, there are two key factors that limit these LVM's performance. Firstly, LVMs lack complex reasoning capability that leads to logical inconsistencies and potential diagnostic errors in generated reports. Secondly, LVMs lack reflection mechanism that leads to an inability to discover errors in the thinking process. To address these gaps, we propose LVMed-R2, a new fine-tuning strategy that introduces complex reasoning and reflection mechanisms for LVMs to enhance medical report generation. To the best of our knowledge, this is the first work to introduce complex reasoning to the medical report generation (MRG) task. Our proposed complex reasoning contains medical knowledge injection and perception-enhancing modules which improve the accuracy of LVMs diagnosis, coupled with a perception tree to provide guidance to limit the perception range. Further, the reflection mechanism forces self-verification for outputs to correct for potential errors. We experimented by fine-tuning LVMs with our proposed LVMed-R2 strategy, using IU-Xray and MIMIC-CXR datasets. Our results, measured on natural language generation (NLG) metrics and clinical efficacy (CE) metrics, demonstrate that LVMs fine-tuned with the proposed reflection mechanism possess the ability to correct outputs and complex reasoning effectively and improve LVMs performance for MRG.