VaPR -- Vision-language Preference alignment for Reasoning

📅 2025-10-02
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing vision-language preference fine-tuning methods (e.g., DPO) overlook noise in synthetic preference data, such as stylistic and length biases, which limits alignment with human preferences. This paper proposes VaPR, a framework that uses LLM-guided response editing (GPT-4o, with open-source editors in the VaPR-OS variant) to generate targeted hard negatives: rejected responses that contain specific errors but match the accepted ones in style and length, yielding high-quality multimodal preference datasets. Training on these stylistically matched hard negatives mitigates dataset biases and substantially reduces the "Yes" bias in binary questions. Applying direct preference optimization with VaPR data yields significant improvements across 10 benchmarks: average gains of 6.5%, 4.0%, and 1.5% on LLaVA, Qwen2-VL, and Qwen2.5-VL, respectively, with performance gains scaling consistently with data volume. The code and dataset are publicly released.

📝 Abstract
Preference finetuning methods like Direct Preference Optimization (DPO) with AI-generated feedback have shown promise in aligning Large Vision-Language Models (LVLMs) with human preferences. However, existing techniques overlook the prevalence of noise in synthetic preference annotations in the form of stylistic and length biases. To this end, we introduce a hard-negative response generation framework based on LLM-guided response editing that produces rejected responses with targeted errors, maintaining stylistic and length similarity to the accepted ones. Using this framework, we develop the VaPR dataset, comprising 30K high-quality samples, to finetune three LVLM families: LLaVA-V1.5, Qwen2VL & Qwen2.5VL (2B-13B sizes). Our VaPR models deliver significant performance improvements across ten benchmarks, achieving average gains of 6.5% (LLaVA), 4.0% (Qwen2VL), and 1.5% (Qwen2.5VL), with notable improvements on reasoning tasks. A scaling analysis shows that performance consistently improves with data size, with LLaVA models benefiting even at smaller scales. Moreover, VaPR reduces the tendency to answer "Yes" in binary questions - addressing a common failure mode in LVLMs like LLaVA. Lastly, we show that the framework generalizes to open-source LLMs as editors, with models trained on VaPR-OS achieving ~99% of the performance of models trained on VaPR, which is synthesized using GPT-4o. Our data, models, and code can be found on the project page https://vap-r.github.io.
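The DPO objective the abstract refers to can be sketched as follows. This is a minimal pure-Python illustration for a single preference pair with scalar log-probabilities; the function name and signature are illustrative, not the paper's implementation:

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """DPO loss for one (chosen, rejected) pair.

    Inputs are summed log-probabilities of each response under the
    trainable policy and the frozen reference model. The loss pushes
    the policy to prefer the chosen response over the rejected one
    relative to the reference, scaled by beta.
    """
    chosen_reward = beta * (policy_chosen_logp - ref_chosen_logp)
    rejected_reward = beta * (policy_rejected_logp - ref_rejected_logp)
    margin = chosen_reward - rejected_reward
    # -log(sigmoid(margin)) = log(1 + exp(-margin)), computed stably
    if margin >= 0:
        return math.log1p(math.exp(-margin))
    return -margin + math.log1p(math.exp(margin))
```

When policy and reference agree (zero margin), the loss is log 2; it shrinks as the policy's preference for the chosen response grows. Hard negatives that match the chosen response in style and length force this margin to come from content errors rather than superficial cues.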
Problem

Research questions and friction points this paper is trying to address.

Addressing noise in synthetic preference annotations for vision-language models
Reducing stylistic and length biases in AI-generated preference feedback
Improving reasoning performance and reducing the "Yes" bias in LVLMs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Generates hard-negative responses using LLM-guided editing
Creates rejected responses with targeted errors while preserving the style and length of the accepted ones
Produces VaPR dataset for finetuning vision-language models
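The pipeline described above can be sketched roughly as below. The prompt wording and the `editor` callable are hypothetical stand-ins for the paper's actual templates and LLM editor:

```python
def build_edit_prompt(question, accepted):
    """Prompt asking an editor LLM to inject one targeted error while
    preserving the style and length of the accepted response.
    (Hypothetical wording; the paper's templates may differ.)"""
    return (
        "Edit the answer below so it contains one targeted factual error "
        "(wrong object, attribute, count, or relation), while keeping its "
        "style, tone, and length unchanged.\n"
        f"Question: {question}\nAnswer: {accepted}\nEdited answer:"
    )

def make_preference_pair(question, accepted, editor):
    """Assemble one preference-finetuning sample. `editor` is any
    callable mapping a prompt string to an edited (rejected) response,
    e.g. a wrapper around an LLM API."""
    rejected = editor(build_edit_prompt(question, accepted))
    return {"prompt": question, "chosen": accepted, "rejected": rejected}
```

Because the rejected response is an edit of the accepted one, the pair differs only in the injected error, not in style or length, which is the property that makes these negatives "hard" for preference optimization.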
👥 Authors
Rohan Wadhawan
Department of Computer Science, University of California Los Angeles, USA
Fabrice Y Harel-Canada
Department of Computer Science, University of California Los Angeles, USA
Zi-Yi Dou
University of California, Los Angeles
Suhaila Shakiah
Amazon.com, Inc., USA
Robinson Piramuthu
Amazon AGI
Nanyun Peng
Department of Computer Science, University of California Los Angeles, USA