🤖 AI Summary
To address challenges in PET-CT multimodal lung tumor segmentation—including difficult anatomical–functional information fusion and high computational overhead—this paper proposes vMambaX, a lightweight framework. Its core is the Context-Gated Cross-modal Perception (CGCP) module, which employs a dynamic gating mechanism to adaptively emphasize discriminative regions while suppressing modality-specific noise, enabling efficient multi-scale interaction between PET and CT features. Built upon the Visual Mamba architecture, vMambaX significantly reduces parameter count and computational complexity. Evaluated on the PCLT20K dataset, vMambaX achieves superior segmentation accuracy (e.g., +2.3% Dice score) with substantially lower resource consumption, demonstrating its effectiveness, efficiency, and scalability for precision diagnosis and treatment of lung cancer.
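The gating idea can be illustrated with a minimal sketch. This is not the paper's implementation: the per-channel sigmoid gate over globally pooled joint context, and the projection weights `w_pet`, `w_ct`, `b`, are illustrative assumptions standing in for the learned CGCP module.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def context_gated_fusion(pet, ct, w_pet, w_ct, b):
    """Fuse PET and CT feature maps of shape (C, H, W) with a channel gate.

    The gate is computed from globally pooled context of both modalities;
    w_pet, w_ct (C, C) and b (C,) are hypothetical stand-ins for the
    learned projection in the paper's gating module.
    """
    ctx_pet = pet.mean(axis=(1, 2))   # (C,) global context of PET features
    ctx_ct = ct.mean(axis=(1, 2))     # (C,) global context of CT features
    gate = sigmoid(w_pet @ ctx_pet + w_ct @ ctx_ct + b)  # (C,) in (0, 1)
    # Convex combination per channel: the gate emphasizes the modality
    # that is informative and down-weights modality-specific noise.
    return gate[:, None, None] * pet + (1.0 - gate)[:, None, None] * ct

rng = np.random.default_rng(0)
C, H, W = 4, 8, 8
pet = rng.standard_normal((C, H, W))
ct = rng.standard_normal((C, H, W))
fused = context_gated_fusion(
    pet, ct,
    0.1 * rng.standard_normal((C, C)),
    0.1 * rng.standard_normal((C, C)),
    np.zeros(C),
)
```

Because the gate lies in (0, 1), the fused map is an elementwise convex combination of the two inputs, so it never leaves the range spanned by the PET and CT features.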
📝 Abstract
Accurate lung tumor segmentation is vital for improving diagnosis and treatment planning, yet effectively combining anatomical and functional information from PET and CT remains a major challenge. In this study, we propose vMambaX, a lightweight multimodal framework integrating PET and CT scan images through a Context-Gated Cross-Modal Perception Module (CGM). Built on the Visual Mamba architecture, vMambaX adaptively enhances inter-modality feature interaction, emphasizing informative regions while suppressing noise. Evaluated on the PCLT20K dataset, the model outperforms baseline models while maintaining lower computational complexity. These results highlight the effectiveness of adaptive cross-modal gating for multimodal tumor segmentation and demonstrate the potential of vMambaX as an efficient and scalable framework for advanced lung cancer analysis. The code is available at https://github.com/arco-group/vMambaX.