🤖 AI Summary
To address the challenges of limited context-window capacity and scarce annotated data in long-video visual question answering (VQA), this paper proposes a data-efficient temporal compression and multimodal alignment framework. It compresses videos into 1-fps frame sequences via lightweight temporal pooling strategies such as motion blur and weighted averaging, and integrates chain-of-thought (CoT) prompting to enhance temporal reasoning. Built on Qwen-2.5-VL 7B, the method employs a two-stage fine-tuning pipeline: supervised fine-tuning (SFT) followed by direct preference optimization (DPO). On ReasonVQA, it achieves F1 = 0.543 (a 33.1-point absolute gain), BLEU-4 = 0.291, and ROUGE-L = 0.528, demonstrating substantial improvements in temporal evidence aggregation and answer explainability, and it shows strong zero-shot transfer to TVQA. Key contributions include a lightweight supervision paradigm built on preference optimization and an interpretable, computation-efficient video compression scheme tailored to long-video VQA.
📝 Abstract
Video Question Answering (VQA) with Large Vision Language Models (LVLMs) has gained significant research traction ever since Flamingo was introduced by DeepMind. Recent advances in long-context/long-video question answering have extended VQA context windows to 1,500+ frames. At a typical 30 fps, however, this covers only about 50 seconds of footage without losing significant information. We introduce POVQA, a data-efficient pipeline that compresses each second of video into a single temporally pooled image (via motion-blur and weighted-averaging variants) and then aligns LVLMs with lightweight supervision. Concretely, we build 1-fps input sources using Blend Blur with Last Frame, Weighted Average, Exponential, and Ramp pooling, and fine-tune Qwen-2.5-VL 7B with a supervised two-turn target comprising reasoning and a final answer. We apply Supervised Fine-Tuning (SFT) and Direct Preference Optimization (DPO) on our novel dataset ReasonVQA, consisting of 12 movies with 239 human-annotated question-answer pairs with reasoning prompts. On ReasonVQA, this method dramatically improves performance over pooled baselines: F1 improves from 0.212 to 0.543, BLEU-4 from 0.031 to 0.291, and ROUGE-L from 0.196 to 0.528. Rationale quality also increases significantly. Cross-evaluation of SFT + DPO across pooling functions shows that the gains persist regardless of the pooling scheme used at train or test time, indicating strong robustness in summarizing temporal evidence. Similar gains were observed in zero-shot evaluation on TVQA.
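The abstract names the pooling operators (Weighted Average, Exponential, Ramp) but does not specify their exact form. A minimal sketch of per-second temporal pooling, assuming frames arrive as RGB uint8 arrays and that the Ramp and Exponential variants weight later frames more heavily (an assumption, not the paper's definition):

```python
import numpy as np

def pool_second(frames, mode="ramp"):
    """Collapse one second of video frames into a single pooled image.

    frames: list of HxWx3 uint8 arrays (e.g. 30 frames at 30 fps).
    mode: 'ramp' (linearly increasing weights) or 'exponential'
          (exponentially increasing weights). Both weighting schemes
    are illustrative guesses at the paper's pooling variants.
    """
    stack = np.stack(frames).astype(np.float64)   # (T, H, W, 3)
    t = np.arange(1, len(frames) + 1, dtype=np.float64)
    if mode == "ramp":
        w = t                          # later frames weigh more, linearly
    elif mode == "exponential":
        w = np.exp(t / len(frames))    # later frames weigh more, exponentially
    else:
        raise ValueError(f"unknown mode: {mode}")
    w /= w.sum()                       # normalize to a convex combination
    pooled = np.tensordot(w, stack, axes=1)  # weighted sum over time axis
    return pooled.round().astype(np.uint8)

# 30 synthetic frames of increasing brightness -> one pooled 1-fps image
frames = [np.full((4, 4, 3), i * 8, dtype=np.uint8) for i in range(30)]
img = pool_second(frames, mode="ramp")
print(img.shape)  # (4, 4, 3)
```

The pooled image stays in the input space of any off-the-shelf LVLM, which is what lets the pipeline trade a 30x reduction in frame count for modest temporal smearing.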