🤖 AI Summary
Surgical smoke, specular reflections, and tissue deformation severely degrade the robustness of intraoperative point tracking, while existing datasets lack the semantic context needed to diagnose failure modes. To address this, we introduce VL-SurgPT, the first vision-language surgical point tracking dataset, which incorporates fine-grained textual descriptions of keypoint states (e.g., "partially occluded by smoke", "distorted by mirror-like reflection") to explicitly encode failure semantics. We further propose TG-SurgPT, a text-guided, context-aware tracking paradigm that dynamically modulates visual features with linguistic priors. In benchmarks of eight state-of-the-art trackers on VL-SurgPT, TG-SurgPT achieves a 12.7% improvement in mean Percentage of Correct Keypoints (mPCK) under smoke and reflection degradation, significantly outperforming vision-only baselines and demonstrating the efficacy of multimodal semantic guidance for robust surgical navigation.
📝 Abstract
Accurate point tracking in surgical environments remains challenging due to complex visual conditions, including smoke occlusion, specular reflections, and tissue deformation. While existing surgical tracking datasets provide coordinate information, they lack the semantic context necessary to understand tracking failure mechanisms. We introduce VL-SurgPT, the first large-scale multimodal dataset that bridges visual tracking with textual descriptions of point status in surgical scenes. The dataset comprises 908 in vivo video clips: 754 for tissue tracking (17,171 annotated points across five challenging scenarios) and 154 for instrument tracking (covering seven instrument types with detailed keypoint annotations). We establish comprehensive benchmarks using eight state-of-the-art tracking methods and propose TG-SurgPT, a text-guided tracking approach that leverages semantic descriptions to improve robustness under visually challenging conditions. Experimental results demonstrate that incorporating point status information significantly improves tracking accuracy and reliability, particularly in adverse visual scenarios where conventional vision-only methods struggle. By bridging the visual and linguistic modalities, VL-SurgPT enables the development of context-aware tracking systems that maintain performance even under challenging intraoperative conditions, a capability crucial for advancing computer-assisted surgery.