🤖 AI Summary
Robust recognition and localization of surgical instruments in minimally invasive endoscopic videos remain challenging under real-world conditions due to complex backgrounds, occlusions, and inter-center variability. Method: The PhaKIR sub-challenge, organized as part of the Endoscopic Vision (EndoVis) challenge at MICCAI 2024, introduces a novel multi-center dataset of thirteen full-length laparoscopic cholecystectomy videos from three medical institutions, with unified annotations for three interrelated tasks: surgical phase recognition, instrument keypoint estimation, and instrument instance segmentation. This design enables joint investigation of instrument localization and procedural context, as well as the integration of temporal information across entire procedures. Results and findings are reported in accordance with the BIAS guidelines for biomedical image analysis challenges. Contribution/Results: The sub-challenge establishes a unique benchmark for developing temporally aware, context-driven methods in robot-assisted minimally invasive surgery and provides a high-quality resource for future research in surgical scene understanding.
📝 Abstract
Reliable recognition and localization of surgical instruments in endoscopic video recordings are foundational for a wide range of applications in computer- and robot-assisted minimally invasive surgery (RAMIS), including surgical training, skill assessment, and autonomous assistance. However, robust performance under real-world conditions remains a significant challenge. Incorporating surgical context, such as the current procedural phase, has emerged as a promising strategy to improve robustness and interpretability.
To address these challenges, we organized the Surgical Procedure Phase, Keypoint, and Instrument Recognition (PhaKIR) sub-challenge as part of the Endoscopic Vision (EndoVis) challenge at MICCAI 2024. We introduced a novel, multi-center dataset comprising thirteen full-length laparoscopic cholecystectomy videos collected from three distinct medical institutions, with unified annotations for three interrelated tasks: surgical phase recognition, instrument keypoint estimation, and instrument instance segmentation. Unlike existing datasets, ours enables joint investigation of instrument localization and procedural context within the same data while supporting the integration of temporal information across entire procedures.
We report results and findings in accordance with the BIAS guidelines for biomedical image analysis challenges. The PhaKIR sub-challenge advances the field by providing a unique benchmark for developing temporally aware, context-driven methods in RAMIS and offers a high-quality resource to support future research in surgical scene understanding.