Token Merging via Spatiotemporal Information Mining for Surgical Video Understanding

📅 2025-09-28
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Vision Transformers (ViTs) incur excessive computational overhead in surgical video understanding due to spatiotemporal token redundancy. Method: This paper proposes a training-free, decoupled token merging approach specifically designed for surgical videos. It introduces a spatiotemporal information-aware framework: temporal merging guided by saliency-weighted aggregation, and spatial merging based on temporal stability analysis that dynamically preserves critical regions, thereby jointly preserving temporal continuity and spatial dynamics. The method requires no fine-tuning and is plug-and-play compatible with existing ViT architectures. Contribution/Results: Evaluated across multiple surgical video tasks, the approach achieves over 65% average reduction in GFLOPs while maintaining competitive accuracy. It significantly enhances modeling efficiency for long-sequence surgical videos without compromising performance.
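The temporal half of the idea, merging spatially corresponding tokens from consecutive frames with saliency-based weights, can be sketched as follows. This is a minimal numpy illustration under assumed shapes; the function name, the pairwise frame grouping, and the weighting formula are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def temporal_merge(tokens, saliency, eps=1e-8):
    """Saliency-weighted merge of spatially corresponding tokens
    across consecutive frame pairs (illustrative sketch).

    tokens:   (T, N, D) per-frame token embeddings
    saliency: (T, N)    nonnegative per-token saliency scores
    returns:  (ceil(T/2), N, D) merged token sequence
    """
    T, N, D = tokens.shape
    T2 = (T // 2) * 2                      # number of frames that pair up
    a, b = tokens[0:T2:2], tokens[1:T2:2]  # corresponding token grids
    wa, wb = saliency[0:T2:2], saliency[1:T2:2]
    # each merged token is a saliency-weighted average of the pair,
    # so the more informative frame dominates the result
    w = wa / (wa + wb + eps)               # (T2/2, N)
    merged = w[..., None] * a + (1.0 - w)[..., None] * b
    if T % 2:                              # odd leftover frame passes through
        merged = np.concatenate([merged, tokens[-1:]], axis=0)
    return merged
```

With uniform saliency this reduces to plain frame-pair averaging; skewing saliency toward one frame makes the merge preserve that frame's tokens.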


๐Ÿ“ Abstract
Vision Transformer models have shown impressive effectiveness in surgical video understanding tasks through long-range dependency modeling. However, current methods suffer from prohibitive computational costs due to processing massive spatiotemporal tokens across video frames. While prior works on token merging have advanced model efficiency, they fail to adequately consider the inherent spatiotemporal structure of video data and overlook the heterogeneous nature of information distribution, leading to suboptimal performance. In this paper, we propose a spatiotemporal information mining token merging (STIM-TM) method, representing the first dedicated approach for surgical video understanding. STIM-TM introduces a decoupled strategy that reduces token redundancy along temporal and spatial dimensions independently. Specifically, the temporal component merges spatially corresponding tokens from consecutive frames using saliency weighting, preserving critical sequential information and maintaining continuity. Meanwhile, the spatial component prioritizes merging static tokens through temporal stability analysis, protecting dynamic regions containing essential surgical information. Operating in a training-free manner, STIM-TM achieves significant efficiency gains with over 65% GFLOPs reduction while preserving competitive accuracy across comprehensive surgical video tasks. Our method also supports efficient training on long-sequence surgical videos, addressing computational bottlenecks in surgical applications.
Problem

Research questions and friction points this paper is trying to address.

Reducing computational costs in surgical video understanding
Merging redundant tokens along temporal and spatial dimensions
Preserving critical surgical information while improving efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Decoupled token merging along temporal and spatial dimensions
Saliency weighting merges spatially corresponding tokens temporally
Temporal stability analysis prioritizes merging static tokens spatially
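The spatial side, prioritizing static tokens for merging via temporal stability analysis, can be sketched the same way. This is a hedged numpy illustration: measuring stability as per-position variance across frames and pooling the most static positions into one summary token are assumptions made here for clarity, not the paper's exact procedure.

```python
import numpy as np

def spatial_merge(tokens, keep_ratio=0.5):
    """Temporal-stability-guided spatial token reduction (illustrative sketch).

    tokens:     (T, N, D) per-frame token embeddings
    keep_ratio: fraction of spatial positions kept intact
    returns:    (T, n_keep + 1, D) tokens with static positions pooled
    """
    T, N, D = tokens.shape
    # low variance across frames = temporally stable (static) position
    stability = tokens.var(axis=0).mean(axis=-1)   # (N,)
    n_keep = max(1, int(N * keep_ratio))
    order = np.argsort(stability)                  # static first, dynamic last
    dynamic_idx = np.sort(order[N - n_keep:])      # protect dynamic regions
    static_idx = order[: N - n_keep]
    kept = tokens[:, dynamic_idx]                  # (T, n_keep, D)
    if static_idx.size == 0:
        return kept
    # static tokens collapse into a single averaged summary token per frame
    pooled = tokens[:, static_idx].mean(axis=1, keepdims=True)
    return np.concatenate([kept, pooled], axis=1)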
๐Ÿ”Ž Similar Papers
No similar papers found.
X
Xixi Jiang
Department of Electronic and Computer Engineering, The Hong Kong University of Science and Technology, Hong Kong, China
C
Chen Yang
Department of Electronic and Computer Engineering, The Hong Kong University of Science and Technology, Hong Kong, China
D
Dong Zhang
Department of Electronic and Computer Engineering, The Hong Kong University of Science and Technology, Hong Kong, China
Pingcheng Dong
Pingcheng Dong
Hong Kong University of Science and Technology
AI ChipModel CompressionHW/SW Co-Design
X
Xin Yang
School of Electronic Information and Communications, Huazhong University of Science and Technology, Wuhan, China
K
Kwang-Ting Cheng
Department of Electronic and Computer Engineering, The Hong Kong University of Science and Technology, Hong Kong, China