Learning to Rank Caption Chains for Video-Text Alignment

📅 2026-03-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limitation of existing binary preference optimization methods in video–text alignment tasks, which struggle to accurately assess generated captions that are suboptimal in preference yet high in visual fidelity. To overcome this, the authors propose a ranking-based alignment approach that constructs large-scale, fully ordered caption chains through a repeated caption degradation strategy, enabling finer-grained modeling of the faithfulness gradient between text and visual content. Instead of conventional binary preference learning, they adopt a ranking optimization objective to jointly fine-tune both the vision encoder and the language model. Experimental results demonstrate that the proposed method significantly outperforms standard Direct Preference Optimization (DPO) on long-form caption generation and evaluation tasks, highlighting the critical role of vision encoder fine-tuning in enhancing video–text alignment performance.
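The repeated caption degradation strategy can be illustrated with a toy sketch. The page does not describe the actual degradation operations, so `degrade` below is a hypothetical stand-in (random word dropping); the point it demonstrates is only the structure of the data: each pass removes content, yielding a totally ordered chain from most to least faithful caption.

```python
import random

def degrade(caption: str, drop_frac: float = 0.15, rng=None) -> str:
    # Toy degradation: drop a fraction of words, reducing faithfulness.
    # (Illustrative stand-in; the paper's real degradation operations
    # are not specified on this page.)
    rng = rng or random.Random(0)
    words = caption.split()
    keep = [w for w in words if rng.random() > drop_frac]
    return " ".join(keep) if keep else words[0]

def build_chain(caption: str, length: int = 4) -> list[str]:
    # Repeatedly degrade the original caption to obtain a totally
    # ordered chain: chain[0] (most faithful) > chain[1] > ... > chain[-1].
    rng = random.Random(0)
    chain = [caption]
    for _ in range(length - 1):
        chain.append(degrade(chain[-1], rng=rng))
    return chain
```

Because each step only removes words from its predecessor, the ordering of the chain is guaranteed by construction rather than by a separate judge model.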

📝 Abstract
Direct preference optimization (DPO) is an effective technique for training language models to generate preferred responses over dispreferred ones. However, this binary "winner-takes-all" approach is suboptimal for vision-language models, whose response quality is highly dependent on visual content. In particular, a response may still be faithful to the visual inputs even if it is less preferable than an alternative. The standard Bradley-Terry DPO formulation lacks this nuance, upweighting winning responses without sufficient regard for whether the "losing" response still maintains high visual fidelity. In this work, we investigate ranking optimization as an alternative that more precisely situates responses' faithfulness to visual inputs. We focus on video-text alignment using detailed video captions, proposing a method to generate challenging, totally ordered caption chains at scale through repeated caption degradation. Our results show ranking optimization outperforms binary DPO for long-form content generation and assessment, and importantly, we find that these approaches require fine-tuning of the vision encoder to be effective, challenging the view of DPO as a purely language-reweighting process.
Problem

Research questions and friction points this paper is trying to address.

video-text alignment
learning to rank
direct preference optimization
vision-language models
caption faithfulness
Innovation

Methods, ideas, or system contributions that make the work stand out.

ranking optimization
video-text alignment
caption chains
direct preference optimization
vision-language models
Ansel Blume
University of Illinois Urbana-Champaign
Burak Uzkent
Amazon, Stanford University
Computer Vision · Computational Sustainability · Audio Processing · NLP
Shalini Chaudhuri
Amazon Prime Video
Garin Kessler
Amazon Prime Video