🤖 AI Summary
Fine-grained surgical activity recognition in multi-view operating room videos remains challenging: existing methods rely on precise camera calibration or model human pose only coarsely. Method: We propose the first calibration-free framework that jointly pre-trains on multi-view surgical videos and 2D human poses. Our approach introduces a discretized 2D pose encoding scheme and enforces cross- and intra-modal geometric consistency as a self-supervised pre-training objective. Leveraging a CLIP-style dual-encoder architecture, it integrates cross-view contrastive learning, masked pose token prediction, and multi-view geometric constraints. Contribution/Results: The framework enables both single- and multi-view activity recognition without camera calibration. Evaluated on two real-world OR datasets, it significantly outperforms strong baselines, especially under few-shot settings, demonstrating robustness, generalizability, and clinical applicability in complex surgical environments.
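The summary mentions cross-view contrastive learning within a CLIP-style dual encoder. The paper's exact loss is not given here, but a standard symmetric InfoNCE objective over paired video/pose embeddings can be sketched as follows (a minimal NumPy sketch; the function name, batch layout, and temperature value are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def clip_contrastive_loss(img_emb, pose_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired embeddings.

    img_emb, pose_emb: (B, D) arrays; row i of each is a matched
    video/pose pair. Embeddings are L2-normalized, a (B, B) similarity
    matrix is built, and cross-entropy is applied in both directions
    with the diagonal entries as the positive targets.
    """
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    pose = pose_emb / np.linalg.norm(pose_emb, axis=1, keepdims=True)
    logits = img @ pose.T / temperature  # (B, B) cosine similarities

    def xent(l):
        # numerically stable log-softmax per row; positives on the diagonal
        l = l - l.max(axis=1, keepdims=True)
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -np.mean(np.diag(logp))

    # average the image-to-pose and pose-to-image directions
    return 0.5 * (xent(logits) + xent(logits.T))
```

With perfectly aligned pairs the loss approaches zero; mismatched pairs drive it up, which is what pushes matched video and pose embeddings together across views.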
📝 Abstract
Understanding the workflow of surgical procedures in complex operating rooms requires deep knowledge of the interactions between clinicians and their environment. Surgical activity recognition (SAR) is a key computer vision task that detects activities or phases from multi-view camera recordings. Existing SAR models often fail to account for fine-grained clinician movements and multi-view knowledge, or they require calibrated multi-view camera setups and advanced point-cloud processing to obtain better results. In this work, we propose PreViPS (Multiview Pretraining for Video-Pose Surgical Activity Recognition), a novel calibration-free multi-view multi-modal pretraining framework that aligns 2D pose and vision embeddings across camera views. Our model follows a CLIP-style dual-encoder architecture: one encoder processes visual features, while the other encodes human pose embeddings. To handle continuous 2D human pose coordinates, we introduce a tokenized discrete representation that converts them into discrete pose embeddings, enabling efficient integration within the dual-encoder framework. To bridge the gap between the two modalities, we propose several pretraining objectives that impose cross- and intra-modality geometric constraints within the embedding space, and we incorporate a masked pose token prediction strategy to enhance representation learning. Extensive experiments and ablation studies demonstrate improvements over strong baselines, while data-efficiency experiments on two distinct operating room datasets further highlight the effectiveness of our approach. We demonstrate the benefits of our approach for surgical activity recognition in both multi-view and single-view settings, showcasing its practical applicability in complex surgical environments. Code will be made available at: https://github.com/CAMMA-public/PreViPS.
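The abstract describes converting continuous 2D pose coordinates into discrete pose tokens, but does not spell out the discretization. One common scheme, shown here purely as an assumed sketch (the bin count, normalization, and flattening order are illustrative, not the paper's actual design), quantizes each normalized (x, y) keypoint into a uniform grid and flattens the pair into a single vocabulary index:

```python
def tokenize_pose(keypoints, num_bins=64):
    """Map continuous (x, y) keypoints in [0, 1] to discrete token IDs.

    Each coordinate is quantized into `num_bins` uniform bins; the
    (x_bin, y_bin) pair is flattened into one index in
    [0, num_bins * num_bins), yielding one token per keypoint that a
    dual-encoder embedding table can consume.
    """
    tokens = []
    for x, y in keypoints:
        # clamp so that x == 1.0 still falls in the last bin
        x = min(max(x, 0.0), 1.0 - 1e-9)
        y = min(max(y, 0.0), 1.0 - 1e-9)
        xb, yb = int(x * num_bins), int(y * num_bins)
        tokens.append(xb * num_bins + yb)
    return tokens

# Example: 3 keypoints for one person, in normalized image coordinates.
pose = [(0.10, 0.20), (0.55, 0.40), (0.90, 0.95)]
print(tokenize_pose(pose))  # → [396, 2265, 3708]
```

Tokens produced this way can be treated like words in a text encoder, which is what makes masked pose token prediction a natural pretraining objective.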