🤖 AI Summary
In long-video understanding, the sheer volume of visual tokens overflows the model's context window, while existing frame-level sparse sampling methods disrupt temporal continuity and thus impair reasoning about motion and events. Method: We propose a key-segment selection paradigm, first systematically demonstrating its superiority over key-frame selection, and introduce explicit temporal coherence modeling. Our approach features F2C, a training-free key-segment sampler, coupled with an adaptive resolution strategy that dynamically balances spatial fidelity and temporal coverage under a fixed token budget; it further integrates short-term coherent segment extraction and efficient token allocation. Contribution/Results: On three major long-video benchmarks (Video-MME, LongVideoBench, and MLVU), our method outperforms uniform sampling by up to 8.1%, 5.6%, and 10.3%, respectively, substantially enhancing long-range temporal reasoning.
📝 Abstract
Video Large Language Models (Video LLMs) have achieved remarkable results on a variety of vision-language tasks, yet their practical use is limited by the "needle in a haystack" problem: the massive number of visual tokens produced from raw video frames exhausts the model's context window. Existing solutions alleviate this issue by selecting a sparse set of frames, thereby reducing token count, but such frame-wise selection discards essential temporal dynamics, leading to suboptimal reasoning about motion and event continuity. In this work, we systematically explore the impact of temporal information and demonstrate that extending selection from isolated key frames to key clips, which are short, temporally coherent segments, improves video understanding. To maintain a fixed computational budget while accommodating the larger token footprint of clips, we propose an adaptive resolution strategy that dynamically balances spatial resolution and clip length, ensuring a constant token count per video. Experiments on three long-form video benchmarks demonstrate that our training-free approach, F2C, outperforms uniform sampling by up to 8.1%, 5.6%, and 10.3% on Video-MME, LongVideoBench, and MLVU, respectively. These results highlight the importance of preserving temporal coherence in frame selection and provide a practical pathway for scaling Video LLMs to real-world video understanding applications. The project webpage is available at https://guangyusun.com/f2c.
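The fixed-budget trade-off behind the adaptive resolution strategy can be illustrated with a small sketch. The snippet below is not the authors' implementation: the patch size, aspect-ratio handling, and function names are assumptions, chosen only to show how, under a constant token count per video, covering more frames forces a lower per-frame resolution and vice versa.

```python
# Minimal sketch (assumptions, not the F2C implementation) of trading
# spatial resolution against clip length under a fixed visual-token budget.
import math

PATCH = 14  # ViT-style patch size (assumed)

def pick_resolution(token_budget: int, num_clips: int, frames_per_clip: int,
                    aspect_ratio: float = 16 / 9) -> tuple[int, int]:
    """Return a (width, height) whose total token count fits the budget.

    Tokens per frame scale with (W / PATCH) * (H / PATCH), so for a fixed
    budget, sampling more frames forces a coarser per-frame resolution.
    """
    total_frames = num_clips * frames_per_clip
    tokens_per_frame = token_budget / total_frames
    # Solve (W / PATCH) * (H / PATCH) <= tokens_per_frame with W = aspect_ratio * H.
    h_patches = math.floor(math.sqrt(tokens_per_frame / aspect_ratio))
    w_patches = math.floor(h_patches * aspect_ratio)
    return w_patches * PATCH, h_patches * PATCH

# Example: with an 8,192-token budget, 4 clips of 16 frames get a coarser
# resolution than 4 clips of 4 frames.
print(pick_resolution(8192, 4, 16))  # -> (196, 112)
print(pick_resolution(8192, 4, 4))   # -> (392, 224)
```

This is only the budgeting half of the method; selecting which temporally coherent segments to keep is the key-segment sampling step described in the paper.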