🤖 AI Summary
Existing token-pruning methods for video large language models often induce representational distortion because they neglect the diversity of attention distributions and rely on similarity-based clustering. To address these limitations, this work proposes Tango, a framework that combines a diversity-driven attention-based token selection mechanism with Spatio-temporal Rotary Position Embedding (ST-RoPE). This approach preserves the spatially multi-modal structure of attention and the geometric layout of tokens during pruning, thereby mitigating cluster fragmentation. Experimental results demonstrate that Tango retains 98.9% of the original LLaVA-OV performance while using only 10% of the video tokens, achieving a 1.88× speedup in inference.
📝 Abstract
Token pruning has emerged as a mainstream approach for developing efficient Video Large Language Models (Video LLMs). This work revisits and advances the two predominant token-pruning paradigms: attention-based selection and similarity-based clustering. Our study reveals two critical limitations in existing methods: (1) conventional top-k selection strategies fail to fully account for the attention distribution, which is often spatially multi-modal and long-tailed in magnitude; and (2) direct similarity-based clustering frequently generates fragmented clusters, resulting in distorted representations after pooling. To address these bottlenecks, we propose Tango, a novel framework designed to optimize the utilization of visual signals. Tango integrates a diversity-driven strategy to enhance attention-based token selection, and introduces Spatio-temporal Rotary Position Embedding (ST-RoPE) to preserve geometric structure via locality priors. Comprehensive experiments across various Video LLMs and video understanding benchmarks demonstrate the effectiveness and generalizability of our approach. Notably, when retaining only 10% of the video tokens, Tango preserves 98.9% of the original performance on LLaVA-OV while delivering a 1.88× inference speedup.
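To make the first idea concrete, here is a minimal sketch of what a diversity-driven alternative to plain top-k selection could look like: a greedy, MMR-style criterion that trades off a token's attention score against its redundancy with tokens already kept, so that all modes of a spatially multi-modal attention map get covered. The function name, the trade-off weight `lam`, and the criterion itself are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def diversity_aware_topk(attn, feats, k, lam=0.5):
    """Greedily keep k tokens, balancing attention score against
    redundancy with already-selected tokens (MMR-style sketch;
    illustrative, not Tango's exact selection rule)."""
    n = len(attn)
    # Cosine similarity between token features measures redundancy.
    normed = feats / (np.linalg.norm(feats, axis=1, keepdims=True) + 1e-8)
    sim = normed @ normed.T
    selected = [int(np.argmax(attn))]        # seed with the top token
    remaining = set(range(n)) - set(selected)
    while len(selected) < k and remaining:
        best, best_score = None, -np.inf
        for i in remaining:
            redundancy = max(sim[i, j] for j in selected)
            score = lam * attn[i] - (1 - lam) * redundancy
            if score > best_score:
                best, best_score = i, score
        selected.append(best)
        remaining.remove(best)
    return sorted(selected)

# Tokens 0 and 1 are near-duplicates; plain top-2 would keep both,
# while the diversity-aware rule keeps token 2 from a second mode.
attn = np.array([0.9, 0.85, 0.1, 0.05])
feats = np.array([[1., 0.], [1., 0.], [0., 1.], [0.5, 0.5]])
print(diversity_aware_topk(attn, feats, k=2))  # → [0, 2]
```

The key design point is that the selection objective is no longer separable per token: each pick depends on what is already kept, which is what lets the method cover several attention modes instead of exhausting the budget on the dominant one.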
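For the second idea, one common way to build a spatio-temporal rotary embedding is to partition the channel dimension into three groups and apply standard 1-D RoPE rotations per group, keyed to a token's (t, h, w) coordinates, so that tokens close in time and space receive similar phases. The sketch below follows that generic 3-D-RoPE pattern under stated assumptions; the paper's ST-RoPE may differ in its exact channel split and frequency schedule.

```python
import numpy as np

def rope_rotate(x, pos, base=10000.0):
    """Standard 1-D RoPE rotation over the last (even-sized) dim of x,
    with one position value per row."""
    d = x.shape[-1]
    inv_freq = base ** (-np.arange(0, d, 2) / d)
    angles = pos[:, None] * inv_freq[None, :]   # (n, d/2)
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[..., 0::2], x[..., 1::2]
    out = np.empty_like(x)
    out[..., 0::2] = x1 * cos - x2 * sin
    out[..., 1::2] = x1 * sin + x2 * cos
    return out

def st_rope(x, t, h, w):
    """Split channels into three groups and rotate each group by one of
    the (t, h, w) token coordinates (generic 3-D RoPE sketch; the
    paper's exact ST-RoPE construction may differ)."""
    d = x.shape[-1]
    g = (d // 3) // 2 * 2      # even-sized group per axis
    out = x.copy()
    out[:, :g]       = rope_rotate(x[:, :g], t)
    out[:, g:2 * g]  = rope_rotate(x[:, g:2 * g], h)
    out[:, 2 * g:3 * g] = rope_rotate(x[:, 2 * g:3 * g], w)
    return out
```

Because each group undergoes a pure rotation, token norms are unchanged and the relative-position property of 1-D RoPE carries over per axis, which is the locality prior the abstract alludes to: pruned-and-repacked tokens still encode where (and when) they came from.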