Bridging the Visual Gap: Fine-Tuning Multimodal Models with Knowledge-Adapted Captions

📅 2024-11-13
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
Small-scale vision-language models (up to 7B parameters) struggle to balance the descriptive richness of long image captions against the risk of hallucination during fine-tuning. To quantify this trade-off, the paper proposes Decomposed NLI (DNLI), an evaluation framework that breaks generated captions into individual propositions and assesses each in isolation. This fine-grained analysis shows that neither reducing caption complexity nor standard data curation resolves the problem. The authors therefore introduce Knowledge Adapted (KnowAda) fine-tuning, a data-centric method that automatically adapts training captions to the model's existing knowledge and visual understanding, minimizing hallucinations while preserving descriptiveness. Validated across several small-scale VLMs and dense caption datasets, KnowAda outperforms various baselines in both automatic metrics and human evaluations.

📝 Abstract
Recent research increasingly focuses on training vision-language models (VLMs) with long, detailed image captions. However, small-scale VLMs often struggle to balance the richness of these captions with the risk of hallucinating content during fine-tuning. In this paper, we explore how well VLMs adapt to such captions. To quantify caption quality, we propose Decomposed NLI (DNLI), an evaluation framework that breaks down generated captions into individual propositions, assessing each in isolation. This fine-grained analysis reveals a critical balance between capturing descriptive details and preventing hallucinations. Our findings show that simply reducing caption complexity or employing standard data curation techniques does not effectively resolve this issue. To tackle this challenge, we introduce Knowledge Adapted (KnowAda) fine-tuning, a data-centric approach that automatically adapts training data with the model's existing knowledge and visual understanding. KnowAda minimizes hallucinations while preserving high descriptiveness. We validate this approach across several small-scale VLMs (up to 7B parameters) and dense caption datasets, demonstrating that KnowAda effectively balances hallucination reduction and descriptiveness. Our results show that KnowAda outperforms various baselines in both automatic metrics and human evaluations. We will release our code and models.
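The DNLI idea described in the abstract, decomposing a caption into propositions and assessing each in isolation, can be sketched as follows. This is a hypothetical illustration, not the paper's implementation: the sentence-splitting decomposition and the set-membership "entailment" check are toy stand-ins (the paper presumably uses learned decomposition and an NLI model), and all function names are invented.

```python
def decompose(caption: str) -> list[str]:
    # Toy decomposition: treat each sentence as one proposition.
    # DNLI as described would use a more careful decomposition step.
    return [s.strip() for s in caption.split(".") if s.strip()]

def dnli_score(caption: str, grounded_facts: set[str]) -> float:
    """Fraction of the caption's propositions supported by ground-truth facts.

    Exact set membership stands in for an NLI entailment model here.
    """
    props = decompose(caption)
    if not props:
        return 0.0
    entailed = sum(1 for p in props if p in grounded_facts)
    return entailed / len(props)

facts = {"a dog sits on a red couch", "the couch is near a window"}
caption = "a dog sits on a red couch. a cat sleeps on the floor."
print(dnli_score(caption, facts))  # 0.5 -> one of two propositions is grounded
```

Scoring at the proposition level, rather than the whole caption, is what lets the framework separate descriptive detail (many propositions) from hallucination (propositions that fail the entailment check).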
Problem

Research questions and friction points this paper is trying to address.

Vision-Language Models
Detail Preservation
Hallucination During Fine-Tuning
Innovation

Methods, ideas, or system contributions that make the work stand out.

KnowAda
Automatic Data Adjustment
Vision-Language Model Improvement
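The "Automatic Data Adjustment" idea above, adapting training captions to what the model already knows, can be sketched as a pruning step. This is a hedged illustration only: `model_confidence` is an invented stand-in for probing a real VLM, and the word-count heuristic is a toy placeholder, not the paper's method.

```python
def model_confidence(proposition: str) -> float:
    # Toy stand-in for probing the model's visual understanding:
    # pretend short, simple propositions are within the model's knowledge.
    return 1.0 if len(proposition.split()) <= 6 else 0.3

def adapt_caption(propositions: list[str], threshold: float = 0.5) -> list[str]:
    """Keep only propositions the model can plausibly ground.

    KnowAda-style adaptation drops training content the model cannot
    verify, so fine-tuning does not teach it to assert unverifiable detail.
    """
    return [p for p in propositions if model_confidence(p) >= threshold]

props = [
    "a dog on a couch",
    "an intricately embroidered victorian cushion beside it",
]
print(adapt_caption(props))  # keeps only the first, easier proposition
```

The design intuition is that hallucination during fine-tuning comes partly from training targets that exceed the model's visual grounding, so filtering the data, rather than the model, addresses the mismatch.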