🤖 AI Summary
Robust instrument and tissue segmentation in minimally invasive surgery (MIS) videos remains challenging due to severe occlusions, texture variations, and dynamic illumination changes. Method: This paper proposes an end-to-end semantic segmentation framework that synergistically integrates Contrastive Language–Image Pretraining (CLIP) with reinforcement learning (RL). CLIP serves as a multimodal feature encoder, while a policy-network-driven RL module—augmented by curriculum learning—dynamically optimizes mask generation to adapt to complex optical conditions. Contribution/Results: To our knowledge, this is the first work to jointly leverage CLIP’s open-vocabulary semantic understanding and RL’s sequential decision-making capability for MIS segmentation. Evaluated on the EndoVis 2018 and 2017 datasets, our method achieves mean Intersection-over-Union (mIoU) scores of 81.0% and 74.12%, respectively—substantially outperforming existing state-of-the-art approaches.
📝 Abstract
Understanding surgical scenes can provide better healthcare quality for patients, especially with the vast amount of video data that is generated during MIS. Processing these videos generates valuable assets for training sophisticated models. In this paper, we introduce CLIP-RL, a novel contrastive language-image pre-training model tailored for semantic segmentation for surgical scenes. CLIP-RL presents a new segmentation approach which involves reinforcement learning and curriculum learning, enabling continuous refinement of the segmentation masks during the full training pipeline. Our model has shown robust performance in different optical settings, such as occlusions, texture variations, and dynamic lighting, presenting significant challenges. CLIP model serves as a powerful feature extractor, capturing rich semantic context that enhances the distinction between instruments and tissues. The RL module plays a pivotal role in dynamically refining predictions through iterative action-space adjustments. We evaluated CLIP-RL on the EndoVis 2018 and EndoVis 2017 datasets. CLIP-RL achieved a mean IoU of 81%, outperforming state-of-the-art models, and a mean IoU of 74.12% on EndoVis 2017. This superior performance was achieved due to the combination of contrastive learning with reinforcement learning and curriculum learning.