🤖 AI Summary
SAM2’s greedy memory mechanism struggles with rapid instrument motion, frequent occlusions, and complex instrument–tissue interactions in surgical video segmentation, leading to degraded performance on long videos. To address this, we propose a training-free video object segmentation method built upon the SAM2 framework. Our core innovation is a context-aware and occlusion-resilient memory module, integrated with a multi-object, single-pass, single-prompt inference mechanism that enables dynamic memory updating and robust occlusion recovery. The method introduces no additional parameters, preserving SAM2’s zero-parameter growth property. It significantly enhances segmentation robustness and inter-frame consistency in long surgical videos with multiple instruments. On EndoVis2017 and EndoVis2018 benchmarks, our approach achieves absolute mIoU improvements of +4.36% and +6.1% over SAM2, respectively, delivering more accurate and temporally stable surgical instrument segmentation.
📝 Abstract
Surgical video segmentation is a critical task in computer-assisted surgery, essential for improving surgical quality and patient outcomes. Recently, the Segment Anything Model 2 (SAM2) framework has demonstrated remarkable advances in both image and video segmentation. However, the inherent limitations of SAM2's greedy-selection memory design are amplified by the unique properties of surgical video (rapid instrument movement, frequent occlusion, and complex instrument–tissue interaction), degrading performance on complex, long videos. To address these challenges, we introduce Memory Augmented (MA)-SAM2, a training-free video object segmentation strategy featuring novel context-aware and occlusion-resilient memory models. MA-SAM2 is robust to occlusions and interactions arising from complex instrument movements while maintaining segmentation accuracy throughout a video. A multi-target, single-loop, one-prompt inference scheme further improves tracking efficiency in multi-instrument videos. Without introducing additional parameters or requiring further training, MA-SAM2 achieves performance improvements of 4.36% and 6.1% over SAM2 on the EndoVis2017 and EndoVis2018 datasets, respectively, demonstrating its potential for practical surgical applications.
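The abstract contrasts SAM2's greedy (recency-based) memory selection with a context-aware, occlusion-resilient alternative but gives no pseudocode. As a rough illustration only (not the paper's actual algorithm), the sketch below shows one way such a memory bank could choose frames: discard frames whose object-presence score suggests occlusion, then rank the rest by embedding similarity to the current frame instead of taking the most recent ones. All function and parameter names (`select_memory_frames`, `object_scores`, `occlusion_threshold`, the slot count) are hypothetical.

```python
import numpy as np

def select_memory_frames(
    frame_embeddings: np.ndarray,   # (T, D) per-frame feature embeddings
    object_scores: np.ndarray,      # (T,) object-presence confidence per frame
    current_embedding: np.ndarray,  # (D,) embedding of the frame being segmented
    num_slots: int = 7,
    occlusion_threshold: float = 0.5,
) -> list:
    """Pick memory-bank frames by relevance rather than pure recency.

    Frames whose object-presence score falls below the threshold
    (likely occlusions) are excluded; the remaining frames are ranked
    by cosine similarity to the current frame and the top slots kept.
    This is an illustrative stand-in for a context-aware memory policy,
    not the MA-SAM2 implementation.
    """
    # Keep only frames where the object is confidently visible.
    visible = np.flatnonzero(object_scores >= occlusion_threshold)
    if visible.size == 0:
        return []

    # Cosine similarity between each visible frame and the current frame.
    emb = frame_embeddings[visible]
    emb = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    query = current_embedding / np.linalg.norm(current_embedding)
    sims = emb @ query

    # Take the most similar frames, returned in temporal order.
    top = visible[np.argsort(-sims)[:num_slots]]
    return sorted(top.tolist())
```

Under this policy a frame hidden behind tissue (low presence score) never enters the memory bank, so the tracker re-identifies the instrument from earlier, unoccluded views once it reappears, which is the occlusion-recovery behavior the abstract describes.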