Bridging Video Quality Scoring and Justification via Large Multimodal Models

📅 2025-06-26
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Traditional video quality assessment (VQA) methods produce only a single scalar score, failing to capture multidimensional quality attributes and lacking interpretability. To address this, we propose the Score-based Instruction Generation (SIG) framework, which integrates large multimodal video models with hierarchical Chain-of-Thought reasoning to jointly generate quantitative quality scores and natural-language explanations. We introduce S2I, the first large-scale instruction dataset of its kind, comprising over 320,000 video-quality instruction pairs, and establish S2I-Bench, a dedicated benchmark for evaluating explainable VQA. Leveraging score-driven automatic instruction generation, progressive fine-tuning, and hierarchical reasoning, SIG achieves significant improvements in score accuracy and explanation plausibility on S2I-Bench and several mainstream benchmarks. This work advances VQA from opaque, black-box scoring toward transparent, interpretable assessment.

πŸ“ Abstract
Classical video quality assessment (VQA) methods generate a numerical score to judge a video's perceived visual fidelity and clarity. Yet a single score fails to describe the video's complex quality dimensions, restricting its applicability. Because they produce linguistic output, video large multimodal models (LMMs) adapted to VQA via instruction tuning have the potential to address this issue. The core of this approach lies in video quality-centric instruction data. Previous explorations mainly focus on the image domain, and their data generation processes rely heavily on human quality annotations and proprietary systems, limiting data scalability and effectiveness. To address these challenges, we propose the Score-based Instruction Generation (SIG) pipeline. Specifically, SIG first scores multiple quality dimensions of an unlabeled video and maps the scores to text-defined levels. It then explicitly incorporates a hierarchical Chain-of-Thought (CoT) to model the correlation between specific dimensions and overall quality, mimicking the human visual system's reasoning process. The automated pipeline eliminates the reliance on expert-written quality descriptions and proprietary systems, ensuring data scalability and generation efficiency. The resulting Score2Instruct (S2I) dataset contains over 320K diverse instruction-response pairs, laying the basis for instruction tuning. Moreover, to advance video LMMs' quality scoring and justification abilities simultaneously, we devise a progressive tuning strategy to fully unleash the power of S2I. Built upon SIG, we further curate a benchmark termed S2I-Bench with 400 open-ended questions to better evaluate the quality justification capacity of video LMMs. Experimental results on S2I-Bench and existing benchmarks indicate that our method consistently improves quality scoring and justification capabilities across multiple video LMMs.
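The abstract's core pipeline step, binning per-dimension scores into text-defined levels and composing them into a coarse-to-fine response, can be sketched as follows. This is a minimal illustrative sketch only: the dimension names, the five-level scale, the bin thresholds, and all function names are assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of SIG's score-to-text mapping and hierarchical
# response assembly. Levels, dimensions, and thresholds are assumed.
LEVELS = ["bad", "poor", "fair", "good", "excellent"]

def score_to_level(score, lo=0.0, hi=1.0):
    """Bin a scalar score in [lo, hi] into one of the text-defined levels."""
    span = (hi - lo) / len(LEVELS)
    idx = min(int((score - lo) / span), len(LEVELS) - 1)
    return LEVELS[idx]

def build_instruction_pair(dimension_scores, overall_score):
    """Compose a hierarchical response: per-dimension judgments first,
    then the overall conclusion, mimicking coarse-to-fine reasoning."""
    steps = [
        f"The {dim} of the video is {score_to_level(s)}."
        for dim, s in dimension_scores.items()
    ]
    steps.append(
        "Considering these dimensions together, the overall quality is "
        f"{score_to_level(overall_score)}."
    )
    return {
        "instruction": "Assess the quality of this video and explain why.",
        "response": " ".join(steps),
    }

pair = build_instruction_pair(
    {"sharpness": 0.82, "stability": 0.35, "color fidelity": 0.61},
    overall_score=0.55,
)
print(pair["response"])
```

Under this sketch, unlabeled videos scored by off-the-shelf quality models yield instruction-response pairs automatically, which is what removes the need for human-written quality descriptions.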
Problem

Research questions and friction points this paper is trying to address.

Bridging video quality scoring and justification via multimodal models
Automating quality instruction data generation without human annotations
Enhancing video LMMs' scoring and justification abilities simultaneously
Innovation

Methods, ideas, or system contributions that make the work stand out.

Score-based Instruction Generation pipeline automates instruction data creation
Hierarchical Chain-of-Thought models dimension-to-overall quality correlations
Progressive tuning jointly enhances scoring and justification
Qizhi Xie
Tsinghua University, Kuaishou Technology
Kun Yuan
Kuaishou Technology
Yunpeng Qu
Tsinghua University
Jiachao Gong
Kuaishou Technology
Mingda Wu
Kuaishou Technology
Ming Sun
Kuaishou Technology
Chao Zhou
Kuaishou Technology
Jihong Zhu
Tsinghua University