🤖 AI Summary
Existing video understanding benchmarks suffer from simplistic content, limited diversity, and data leakage, hindering long-horizon, story-level reasoning. To address this, we introduce SF20K, a large-scale, leakage-free benchmark of 20,143 publicly available amateur short films designed for story-level comprehension. SF20K supports both multiple-choice and open-ended question answering, emphasizing long-range causal modeling and narrative coherence. Methodologically, we propose an instruction-tuning framework for vision-language models that integrates multimodal feature alignment with long-sequence attention mechanisms. Experiments show that SF20K exposes critical limitations of state-of-the-art VLMs in long-horizon reasoning, and instruction tuning yields an average accuracy improvement of 12.7% on SF20K, establishing a reproducible benchmark and technical pathway for story-level video understanding.
📝 Abstract
Recent developments in vision-language models have significantly advanced video understanding. Existing datasets and tasks, however, have notable limitations. Most datasets are confined to short videos with limited events and narrow narratives. For example, datasets with instructional and egocentric videos often depict activities of one person in a single scene. Although existing movie datasets offer richer content, they are often limited to short-term tasks, lack publicly available videos, and frequently encounter data leakage issues given the use of subtitles and other information about commercial movies during LLM pretraining. To address the above limitations, we propose Short-Films 20K (SF20K), the largest publicly available movie dataset. SF20K is composed of 20,143 amateur films and offers long-term video tasks in the form of multiple-choice and open-ended question answering. Our extensive analysis of SF20K reveals minimal data leakage, emphasizes the need for long-term reasoning, and demonstrates the strong performance of recent VLMs. Finally, we show that instruction tuning on the SF20K-Train set substantially improves model performance, paving the way for future progress in long-term video understanding.
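The multiple-choice question-answering setting described above is typically scored by exact-match accuracy over answer options. Below is a minimal sketch of such an evaluation loop; the record fields (`question`, `options`, `answer`) and the toy examples are illustrative assumptions, not the dataset's actual schema or content:

```python
# Hypothetical SF20K-style multiple-choice QA records. Field names and
# contents are illustrative assumptions, not the dataset's actual schema.
records = [
    {"question": "Why does the protagonist leave town?",
     "options": ["A fresh start", "A job offer", "A family feud", "A debt"],
     "answer": 2},
    {"question": "What triggers the final confrontation?",
     "options": ["A letter", "A phone call", "A chance meeting", "A theft"],
     "answer": 1},
]

def mcq_accuracy(records, predict):
    """Fraction of questions where the predictor picks the gold option index.

    `predict` maps (question, options) -> chosen option index.
    """
    correct = sum(predict(r["question"], r["options"]) == r["answer"]
                  for r in records)
    return correct / len(records)

# Trivial baseline predictor: always chooses the first option.
baseline = lambda question, options: 0
print(mcq_accuracy(records, baseline))  # -> 0.0 on the two toy records above
```

In practice `predict` would wrap a VLM's answer selection (e.g., highest-likelihood option); the open-ended track would instead require a free-form generation metric rather than index matching.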