🤖 AI Summary
To address the heavy reliance on manual annotations and poor generalization in surgical instrument recognition from video, this paper introduces RASO, an open-set surgical recognition foundation model. Methodologically, we propose the first weakly supervised learning framework that automatically constructs 3.6 million image-text-label triplets from 2,200 unlabeled surgical tutorial videos, covering 2,066 fine-grained surgical instruments and anatomical structures; the framework integrates multimodal alignment, large-scale video parsing, contrastive learning, and prompt-based fine-tuning. Our key contributions are: (1) the first zero-shot open-set recognition model capable of cross-procedure and cross-modal (image/video) inference without predefined categories; (2) zero-shot mAP improvements of 2.9, 4.5, 10.6, and 7.2 across four standard surgical benchmarks; and (3) state-of-the-art performance in fully supervised surgical action recognition. The code, models, and dataset are fully open-sourced.
📝 Abstract
We present RASO, a foundation model designed to Recognize Any Surgical Object, offering robust open-set recognition capabilities across a broad range of surgical procedures and object classes, in both surgical images and videos. RASO leverages a novel weakly supervised learning framework that generates tag-image-text pairs automatically from large-scale unannotated surgical lecture videos, significantly reducing the need for manual annotations. Our scalable data generation pipeline scales to 2,200 surgical procedures and produces 3.6 million tag annotations across 2,066 unique surgical tags. Our experiments show that RASO achieves improvements of 2.9 mAP, 4.5 mAP, 10.6 mAP, and 7.2 mAP on four standard surgical benchmarks, respectively, in the zero-shot setting, and surpasses state-of-the-art models on supervised surgical action recognition tasks. We will open-source our code, model, and dataset to facilitate further research.