🤖 AI Summary
Existing CT report generation methods rely heavily on global image features and often neglect fine-grained lesion details, compromising clinical relevance and interpretability. To address this, we propose a region-focused multimodal report generation framework with three components: (1) a region-representative token pooling mechanism that extracts both global slice tokens and region tokens from 3D CT features; (2) pseudo-masks produced by a universal segmentation model and processed by a mask encoder, directing attention to clinically relevant regions; and (3) a patient-specific attribute textualization module that converts segmentation-derived measurements (organ size, diameter, location) into text prompts, improving alignment between imaging findings and clinical semantics. A unified architecture integrates a pretrained 2D vision model, a general-purpose segmentation model, a mask encoder, and a multimodal large language model, jointly modeling multi-scale visual and linguistic features via region pooling and text-prompt fusion. Evaluated on the RadGenome-Chest CT dataset, the method achieves state-of-the-art performance, improving report fluency, clinical accuracy, and lesion-level interpretability.
📝 Abstract
The recent release of RadGenome-Chest CT has significantly advanced CT-based report generation. However, existing methods primarily focus on global features, making it difficult to capture region-specific details, so certain abnormalities may go unnoticed. To address this, we propose MedRegion-CT, a region-focused Multi-Modal Large Language Model (MLLM) framework with three key innovations. First, we introduce Region Representative ($R^2$) Token Pooling, which leverages a vision model pretrained on 2D images to efficiently extract 3D CT features. It produces global tokens representing overall slice features and region tokens highlighting target areas, enabling the MLLM to process comprehensive information effectively. Second, a universal segmentation model generates pseudo-masks for six predefined regions, which a mask encoder processes into region-centric features, allowing the MLLM to focus on clinically relevant areas. Third, we leverage the segmentation results to extract patient-specific attributes, including organ size, diameter, and location. These are converted into text prompts, enriching the MLLM's understanding of patient-specific context. To ensure rigorous evaluation, we conducted benchmark experiments on report generation using RadGenome-Chest CT. MedRegion-CT achieved state-of-the-art performance, outperforming existing methods in natural language generation quality and clinical relevance while maintaining interpretability. The code for our framework is publicly available.
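The global-plus-region token idea described above can be sketched as masked average pooling over per-slice feature maps. The paper's exact pooling operator is not specified here, so the function name, tensor shapes, and the choice of simple mean pooling below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def r2_token_pooling(slice_feats, region_masks):
    """Illustrative sketch of region-representative token pooling.

    slice_feats:  (S, H, W, D) feature maps from a 2D vision encoder,
                  one map per CT slice.
    region_masks: (R, S, H, W) binary pseudo-masks for R predefined regions
                  (e.g. R = 6 anatomical regions).
    Returns global tokens of shape (S, D) and region tokens of shape (R, D).
    """
    S, H, W, D = slice_feats.shape
    # Global token per slice: average over all spatial positions.
    global_tokens = slice_feats.reshape(S, H * W, D).mean(axis=1)

    # Region token: masked average pooling over every voxel in the region.
    flat_feats = slice_feats.reshape(-1, D)          # (S*H*W, D)
    region_tokens = []
    for mask in region_masks:
        w = mask.reshape(-1).astype(np.float32)      # (S*H*W,)
        denom = max(w.sum(), 1.0)                    # avoid divide-by-zero
        region_tokens.append((w[:, None] * flat_feats).sum(axis=0) / denom)
    return global_tokens, np.stack(region_tokens)
```

With an all-ones mask, the region token reduces to the mean feature over the whole volume, so the region tokens interpolate between global pooling and tight lesion-level pooling as the pseudo-masks shrink.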