Captioning for Text-Video Retrieval via Dual-Group Direct Preference Optimization

📅 2025-09-20
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
In text-to-video retrieval, generic captions generalize well but lack discriminability, hindering fine-grained semantic matching; moreover, conventional language-generation metrics (e.g., BLEU) are misaligned with retrieval objectives. To address this, the paper proposes CaRe-DPO, a framework that injects retrieval relevance signals directly into caption-generation optimization. It introduces Dual-Group Direct Preference Optimization (DG-DPO), a preference-learning algorithm that aligns caption generation with retrieval goals, and incorporates role embeddings to explicitly distinguish the functional semantics of query text and auxiliary captions. Built on end-to-end training of a multimodal large language model, CaRe-DPO achieves significant improvements on standard benchmarks, including MSR-VTT and ActivityNet, demonstrating the effectiveness of retrieval-driven caption generation.

📝 Abstract
In text-video retrieval, auxiliary captions are often used to enhance video understanding, bridging the gap between the modalities. While recent advances in multi-modal large language models (MLLMs) have enabled strong zero-shot caption generation, we observe that such captions tend to be generic and indistinguishable across visually similar videos, limiting their utility for fine-grained retrieval. Moreover, conventional captioning approaches are typically evaluated using language generation metrics, such as BLEU, which are not tailored for retrieval tasks that require making discriminative distinctions between candidates. To address this, we propose CaRe-DPO, a retrieval framework that directly optimizes caption generation using retrieval relevance scores. At its core is Dual-Group Direct Preference Optimization (DG-DPO), a novel learning strategy that supervises captioning by modeling preferences across groups of distinct video and caption pairs. In addition, we present an MLLM-based retrieval model that incorporates role-embeddings to better distinguish between textual inputs with different functional roles, such as an auxiliary caption and a text query. Through extensive experiments, we demonstrate that CaRe-DPO significantly enhances retrieval performance by effectively leveraging auxiliary knowledge to generate fine-grained captions for retrieval. Code is available at https://github.com/mlvlab/CaReDPO.
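The exact DG-DPO objective is not spelled out in this summary. For orientation, below is a minimal sketch of the standard DPO loss that it builds on; in DG-DPO, the preferred/rejected captions would presumably be ranked by retrieval relevance across groups of video-caption pairs. All function and argument names here are illustrative, not the paper's.

```python
import math

def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """Standard DPO loss for a single preference pair (illustrative sketch).

    logp_w / logp_l         : policy log-prob of the preferred / rejected caption
    ref_logp_w / ref_logp_l : the same quantities under the frozen reference model
    beta                    : strength of the KL-style regularization toward the reference
    """
    margin = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    # Loss is -log(sigmoid(margin)), computed in a numerically stable way.
    if margin >= 0:
        return math.log1p(math.exp(-margin))
    return -margin + math.log1p(math.exp(margin))
```

When both captions are equally favored relative to the reference model, the loss sits at log 2; it decreases as the preferred caption gains relative log-probability over the rejected one.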
Problem

Research questions and friction points this paper is trying to address.

Generic captions from MLLMs limit fine-grained text-video retrieval
Conventional captioning metrics (e.g., BLEU) do not reward the discrimination retrieval requires
How to optimize caption generation directly with retrieval relevance scores
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dual-Group Direct Preference Optimization for caption supervision
Retrieval relevance scores directly optimize caption generation
Role-embeddings distinguish functional roles of textual inputs
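The abstract does not detail how the role-embeddings are wired into the retrieval model. One common design, offered here only as a hedged guess mirroring BERT-style segment/type embeddings, adds a learned per-role vector to every token embedding so the model can tell a text query apart from an auxiliary caption. Names and dimensions below are illustrative.

```python
# Hypothetical role set; the paper distinguishes query text from auxiliary captions.
ROLE_IDS = {"query": 0, "caption": 1}

def add_role_embedding(token_embs, role, role_table):
    """Add the learned role vector for `role` to each token embedding.

    token_embs : list of token embedding vectors (plain lists of floats)
    role_table : one learned vector per role, indexed by ROLE_IDS
    """
    role_vec = role_table[ROLE_IDS[role]]
    return [[t + r for t, r in zip(tok, role_vec)] for tok in token_embs]
```

In a real model the role vectors would be trainable parameters and the addition would happen on GPU tensors; the list arithmetic here only illustrates the mechanism.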