🤖 AI Summary
Video large language models (video LLMs) suffer from high computational overhead and low inference efficiency due to redundant visual tokens. Existing inner-LLM pruning still incurs non-negligible computation in the shallow layers, while outer-LLM pruning addresses only local spatiotemporal redundancy and neglects the global temporal dynamics of long videos, limiting overall spatiotemporal compression; moreover, the synergy between inner and outer pruning remains unexplored. This paper proposes HoliTom, the first training-free, inner-outer collaborative token fusion framework: outer-LLM pruning segments the video via global temporal redundancy awareness and merges spatiotemporal tokens across frames, while inner-LLM pruning adaptively fuses similar tokens in a similarity-driven manner. Evaluated on LLaVA-OneVision-7B, the method reduces FLOPs to 6.9% of the baseline while retaining 99.1% of the original performance, cuts time-to-first-token (TTFT) by 2.28×, and improves decoding throughput by 1.32×, significantly enhancing efficiency and scalability for video understanding.
📝 Abstract
Video large language models (video LLMs) excel at video comprehension but face significant computational inefficiency due to redundant video tokens. Existing token pruning methods offer partial solutions. However, approaches operating within the LLM (inner-LLM pruning), such as FastV, incur intrinsic computational overhead in shallow layers. In contrast, methods performing token pruning before the LLM (outer-LLM pruning) primarily address spatial redundancy within individual frames or limited temporal windows, neglecting the crucial global temporal dynamics and correlations across longer video sequences. This leads to sub-optimal spatio-temporal reduction and does not fully exploit video compressibility. Crucially, the synergistic potential and mutual influence of combining these strategies remain unexplored. To further reduce redundancy, we introduce HoliTom, a novel training-free holistic token merging framework. HoliTom employs outer-LLM pruning through global redundancy-aware temporal segmentation, followed by spatial-temporal merging to reduce visual tokens by over 90%, significantly alleviating the LLM's computational burden. Complementing this, we introduce a robust inner-LLM token similarity-based merging approach, designed for superior performance and compatibility with outer-LLM pruning. Evaluations demonstrate our method's promising efficiency-performance trade-off on LLaVA-OneVision-7B, reducing computational costs to 6.9% of the original FLOPs while maintaining 99.1% of the original performance. Furthermore, we achieve a 2.28x reduction in Time-To-First-Token (TTFT) and a 1.32x acceleration in decoding throughput, highlighting the practical benefits of our integrated pruning approach for efficient video LLM inference.
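To make the "similarity-based merging" idea concrete, the sketch below shows a generic greedy cosine-similarity token fusion: each incoming token is either absorbed into the most similar existing cluster (averaging the embeddings) or kept as a new token. This is an illustrative toy, not HoliTom's actual algorithm; the function name, threshold value, and greedy strategy are all assumptions for the example.

```python
import numpy as np

def merge_similar_tokens(tokens: np.ndarray, threshold: float = 0.9) -> np.ndarray:
    """Greedily fuse token embeddings whose cosine similarity exceeds `threshold`.

    tokens: (N, D) array of visual token embeddings.
    Returns an (M, D) array with M <= N, where each surviving token is the
    mean of the original tokens merged into its cluster.
    """
    # Unit-normalize once so dot products are cosine similarities.
    norms = np.linalg.norm(tokens, axis=1, keepdims=True)
    unit = tokens / np.clip(norms, 1e-8, None)

    cluster_sums = []    # running embedding sum per cluster
    cluster_counts = []  # number of tokens per cluster
    anchors = []         # unit vector of each cluster's first token

    for tok, u in zip(tokens, unit):
        if anchors:
            sims = np.stack(anchors) @ u          # similarity to every anchor
            j = int(np.argmax(sims))
            if sims[j] > threshold:               # redundant: fuse into cluster j
                cluster_sums[j] += tok
                cluster_counts[j] += 1
                continue
        cluster_sums.append(tok.copy())           # distinct: start a new cluster
        cluster_counts.append(1)
        anchors.append(u)

    return np.stack(cluster_sums) / np.array(cluster_counts)[:, None]
```

With a high threshold, near-duplicate tokens (e.g. static background patches repeated across frames) collapse into one averaged token while distinct tokens survive, which is the basic mechanism behind the >90% visual-token reduction the abstract describes.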