🤖 AI Summary
This work addresses fine-grained event recognition in surgical videos, where annotations are scarce and precise temporal understanding is essential. To this end, the authors propose a video-language pre-training framework tailored to long-duration surgical procedures. Built around a context-aware multimodal alignment mechanism, the framework combines several novel pre-training objectives, including Contextual Video-Text Contrastive Learning (VTC_CTX), Clip Order Prediction (COP), cycle-consistent alignment, and Frame-Text Matching (FTM), to strengthen both local and global semantic consistency as well as temporal modeling. Evaluated in a zero-shot setting, the proposed method sets a new state of the art across multiple public surgical video benchmarks for phase, step, instrument, and triplet recognition.
📝 Abstract
Video-language foundation models have proven to be highly effective in zero-shot applications across a wide range of tasks. A particularly challenging area is the intraoperative surgical procedure domain, where labeled data is scarce and precise temporal understanding is often required for complex downstream tasks. To address this challenge, we introduce CliPPER (Contextual Video-Language Pretraining on Long-form Intraoperative Surgical Procedures for Event Recognition), a novel video-language pretraining framework trained on surgical lecture videos. Our method is designed for fine-grained temporal video-text recognition and introduces several novel pretraining strategies to improve multimodal alignment in long-form surgical videos. Specifically, we propose Contextual Video-Text Contrastive Learning (VTC_CTX) and Clip Order Prediction (COP) pretraining objectives, both of which leverage temporal and contextual dependencies to enhance local video understanding. In addition, we incorporate a Cycle-Consistency Alignment over video-text matches within the same surgical video to enforce bidirectional consistency and improve overall representation coherence. Moreover, we introduce a finer-grained alignment loss, Frame-Text Matching (FTM), which matches individual video frames to text. As a result, our model establishes a new state-of-the-art across multiple public surgical benchmarks, including zero-shot recognition of phases, steps, instruments, and triplets. The source code and pretraining captions can be found at https://github.com/CAMMA-public/CliPPER.
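To make the video-text contrastive component concrete, here is a minimal NumPy sketch of a standard symmetric InfoNCE loss over clip and caption embeddings. This is an illustration of the generic objective that contrastive video-language pretraining builds on, not the paper's VTC_CTX variant: the contextual conditioning on surrounding clips that CliPPER adds is omitted, and the function name, temperature value, and embedding shapes are assumptions for the example.

```python
import numpy as np

def video_text_contrastive(video_emb, text_emb, temperature=0.07):
    """Symmetric video-text InfoNCE loss over a batch of N pairs.

    Illustrative sketch only; CliPPER's VTC_CTX additionally uses
    temporal context from neighboring clips, which is not shown here.
    """
    # L2-normalize so the dot product equals cosine similarity.
    v = video_emb / np.linalg.norm(video_emb, axis=1, keepdims=True)
    t = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    logits = v @ t.T / temperature   # (N, N): clip i vs. caption j
    n = len(v)                       # clip i's positive is caption i

    def xent_diag(lg):
        # Cross-entropy where the correct class lies on the diagonal.
        lg = lg - lg.max(axis=1, keepdims=True)  # numerical stability
        logp = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -logp[np.arange(n), np.arange(n)].mean()

    # Average the video-to-text and text-to-video directions.
    return 0.5 * (xent_diag(logits) + xent_diag(logits.T))
```

With a low temperature, correctly paired embeddings yield a much lower loss than mispaired ones, which is what drives the clips and their captions toward a shared representation space.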