AI Summary
Existing CLIP-based models rely on global image features, limiting zero-shot fine-grained surgical action triplet (subject-verb-object) recognition, particularly in generalizing to unseen anatomical structures and instrument-verb combinations. To address this, we propose a triplet-aware vision-language learning framework: (i) hierarchical prompt modeling, explicitly encoding semantic hierarchies of subject, verb, and object; (ii) LoRA-driven adaptive fine-tuning of the visual backbone to enhance object-centric, fine-grained representations; and (iii) graph-structured patch clustering distillation, jointly consolidating anatomical, instrumental, and action-related features. We introduce the first benchmark enabling dual-axis zero-shot generalization across both anatomical targets and instrument-verb pairs. Evaluated on CholecT50, our method achieves significant improvements in F1-score and mean average precision (mAP), marking the first demonstration of robust zero-shot recognition for previously unseen anatomical regions and instrument-verb compositions.
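To make the hierarchical prompt idea concrete, the sketch below builds component-level prompts (instrument, verb, target) alongside full triplet prompts, so that triplets sharing a component also share text semantics. This is an illustrative assumption, not the paper's actual prompt templates; the category names are examples drawn from the CholecT50 label space.

```python
# Illustrative sketch only: hierarchical prompts that share component-level text
# across triplets. Templates and helper names are assumptions, not the paper's code.
INSTRUMENTS = ["grasper", "hook", "scissors"]
VERBS = ["grasp", "dissect", "cut"]
TARGETS = ["gallbladder", "cystic duct", "liver"]

def build_hierarchical_prompts():
    # Component-level prompts: one phrase per subject / verb / object category.
    component_prompts = {
        "instrument": [f"a surgical image showing a {i}" for i in INSTRUMENTS],
        "verb": [f"a surgical action of {v}" for v in VERBS],
        "target": [f"a surgical view of the {t}" for t in TARGETS],
    }
    # Triplet-level prompts: compositions of the shared components.
    triplet_prompts = [
        f"a {i} {v} the {t}"
        for i in INSTRUMENTS for v in VERBS for t in TARGETS
    ]
    return component_prompts, triplet_prompts
```

Because unseen triplets are recombinations of seen components, prompts built this way let the text encoder transfer component semantics to novel instrument-verb or target combinations.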
Abstract
While vision-language models like CLIP have advanced zero-shot surgical phase recognition, they struggle with fine-grained surgical activities, especially action triplets. This limitation arises because current CLIP formulations rely on global image features, which overlook the fine-grained semantics and contextual details crucial for complex tasks like zero-shot triplet recognition. Furthermore, these models do not explore the hierarchical structure inherent in triplets, reducing their ability to generalize to novel triplets. To address these challenges, we propose fine-CLIP, which learns object-centric features and lever- ages the hierarchy in triplet formulation. Our approach integrates three components: hierarchical prompt modeling to capture shared semantics, LoRA-based vision backbone adaptation for enhanced feature extraction, and a graph-based condensation strategy that groups similar patch features into meaningful object clusters. Since triplet classification is a challenging task, we introduce an alternative yet meaningful base-to-novel generalization benchmark with two settings on the CholecT50 dataset: Unseen-Target, assessing adaptability to triplets with novel anatomical structures, and Unseen-Instrument-Verb, where models need to generalize to novel instrument-verb interactions. fine-CLIP shows significant improvements in F1 and mAP, enhancing zero-shot recognition of novel surgical triplets.
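For readers unfamiliar with LoRA-based backbone adaptation, the following is a minimal PyTorch sketch of the general technique: a frozen linear layer from a CLIP-style vision transformer is wrapped with a trainable low-rank update, and image embeddings are scored against triplet text prompts by cosine similarity. The class and function names (LoRALinear, score_triplets) and hyperparameters are illustrative assumptions, not fine-CLIP's implementation.

```python
# Minimal sketch (not the authors' code): LoRA adaptation of a linear layer in a
# CLIP-style vision backbone, plus cosine-similarity scoring against triplet prompts.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LoRALinear(nn.Module):
    """Frozen base layer plus a trainable low-rank update: W x + (alpha/r) * B A x."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                      # keep pretrained weights frozen
        self.lora_a = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, r))  # zero-init: no change at start
        self.scaling = alpha / r

    def forward(self, x):
        return self.base(x) + self.scaling * F.linear(F.linear(x, self.lora_a), self.lora_b)

def score_triplets(image_emb, prompt_emb, temperature: float = 0.07):
    """Cosine similarity between image embeddings and one text embedding per triplet prompt."""
    image_emb = F.normalize(image_emb, dim=-1)
    prompt_emb = F.normalize(prompt_emb, dim=-1)
    return image_emb @ prompt_emb.t() / temperature      # (batch, num_triplets) logits
```

Only the small lora_a and lora_b matrices are updated during fine-tuning, which is why this style of adaptation can specialize a pretrained CLIP backbone to surgical imagery without retraining or destabilizing the full model.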