🤖 AI Summary
Existing vision-language models (VLMs) suffer from low inference efficiency, frequent hallucination, and insufficient generation confidence in image captioning. To address these challenges, we propose ViMaR, a two-stage value-guided inference framework. In Stage I, a temporal-difference value model performs a single forward pass to select the highest-value caption among diverse candidates; in Stage II, only weakly grounded fragments undergo fine-grained decoding. We introduce two key innovations: a margin-aware reward adjustment and a calibrated margin-based penalty that discourages low-confidence continuations, together enabling zero-shot generalization across architectures (e.g., LLaVA-Mistral → LLaVA-OneVision) and lightweight adaptation. Combined with efficient self-training, ViMaR achieves over 4× inference speedup across multiple VLMs while significantly improving the factual consistency, detail fidelity, and interpretability of generated captions, and also enhances the underlying model's visual understanding capability.
📝 Abstract
Despite significant advances in inference-time search for vision-language models (VLMs), existing approaches remain both computationally expensive and prone to unpenalized, low-confidence generations, which often lead to persistent hallucinations. We introduce **Value-guided Inference with Margin-based Reward (ViMaR)**, a two-stage inference framework that improves both efficiency and output fidelity by combining a temporal-difference value model with a margin-aware reward adjustment. In the first stage, we perform a single pass to identify the highest-value caption among diverse candidates. In the second stage, we selectively refine only those segments that were overlooked or exhibit weak visual grounding, thereby eliminating frequently rewarded evaluations. A calibrated margin-based penalty discourages low-confidence continuations while preserving descriptive richness. Extensive experiments across multiple VLM architectures demonstrate that ViMaR generates captions that are significantly more reliable, factually accurate, detailed, and explanatory, while achieving over 4× speedup compared to existing value-guided methods. Specifically, we show that ViMaR trained solely on LLaVA Mistral-7B *generalizes effectively to guide decoding in a stronger unseen model*. To further validate this, we adapt ViMaR to steer generation in LLaVA-OneVision-Qwen2-7B, leading to consistent improvements in caption quality and demonstrating robust cross-model guidance. This cross-model generalization highlights ViMaR's flexibility and modularity, positioning it as a scalable and transferable inference-time decoding strategy. Furthermore, when ViMaR-generated captions are used for self-training, the underlying models achieve substantial gains across a broad suite of visual comprehension benchmarks, underscoring the potential of fast, accurate, and self-improving VLM pipelines.
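The two-stage procedure described above can be sketched in miniature. This is a toy illustration, not the paper's implementation: the function names (`stage1_select`, `stage2_refine`, `margin_adjusted_value`), the scalar value/confidence/grounding scores, and all thresholds are hypothetical stand-ins for the learned temporal-difference value model and the calibrated margin-based penalty.

```python
def margin_adjusted_value(value: float, confidence: float,
                          margin: float = 0.2, penalty: float = 0.5) -> float:
    """Margin-based penalty: candidates whose confidence falls below the
    margin are pushed down in proportion to the shortfall."""
    return value - penalty * max(0.0, margin - confidence)

def stage1_select(candidates, value_fn):
    """Stage I: a single scoring pass over diverse full-caption candidates;
    keep the one with the highest (penalty-adjusted) value."""
    return max(candidates, key=value_fn)

def stage2_refine(segments, grounding, refine_fn, threshold=0.5):
    """Stage II: selectively re-decode only weakly grounded segments,
    leaving well-grounded ones untouched (no repeated evaluation)."""
    return [refine_fn(s) if g < threshold else s
            for s, g in zip(segments, grounding)]

# Toy usage with hand-assigned (value, confidence) pairs per candidate.
scores = {"a cat on a mat": (0.9, 0.8),
          "a dog on a mat": (0.85, 0.1),   # high value but low confidence
          "an empty room":  (0.3, 0.9)}
best = stage1_select(scores, lambda c: margin_adjusted_value(*scores[c]))
refined = stage2_refine(["a cat", "holding a violin"], [0.9, 0.2],
                        lambda s: s + " [re-decoded]")
print(best)     # "a cat on a mat"
print(refined)  # ["a cat", "holding a violin [re-decoded]"]
```

Note how the penalty changes the Stage I ranking: without it, the low-confidence "a dog on a mat" (raw value 0.85) would nearly tie the grounded caption, which is exactly the failure mode the margin term is meant to suppress.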