🤖 AI Summary
This work addresses the tendency of large vision-language models to rely excessively on linguistic priors at the expense of visual evidence. To mitigate this bias, the authors propose a novel metric called Visual Information Gain (VIG), which quantifies—via perplexity—the reduction in prediction uncertainty attributable to visual input, enabling fine-grained, sample- and token-level analysis. Leveraging VIG, they introduce a selective training strategy that prioritizes high-VIG data, thereby reducing redundant supervision and strengthening the model’s visual grounding. Experimental results demonstrate that this approach effectively alleviates language bias and achieves superior performance even when training exclusively on samples and tokens exhibiting high visual information gain.
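The summary describes VIG as a perplexity-based measure of how much the image reduces the model's uncertainty about the answer. A minimal sketch of that idea, assuming access to the model's per-token probabilities for the ground-truth answer with and without the image (function names and the exact sample/token formulations are illustrative, not taken from the paper):

```python
import math

def perplexity(token_probs):
    # Perplexity = exp of the mean negative log-probability over answer tokens.
    nll = [-math.log(p) for p in token_probs]
    return math.exp(sum(nll) / len(nll))

def visual_information_gain(probs_text_only, probs_with_image):
    """Sample-level VIG: the perplexity drop when the image is supplied.
    Positive values mean the visual input reduced prediction uncertainty."""
    return perplexity(probs_text_only) - perplexity(probs_with_image)

def token_vig(probs_text_only, probs_with_image):
    # Token-level VIG: per-token log-probability gain from conditioning on the image.
    return [math.log(p_img) - math.log(p_txt)
            for p_txt, p_img in zip(probs_text_only, probs_with_image)]
```

Tokens naming colors, spatial relations, or attributes would be expected to show large per-token gains, since those are hard to predict from the text prompt alone.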
📝 Abstract
Large Vision-Language Models (LVLMs) have achieved remarkable progress, yet they often suffer from language bias, producing answers without relying on visual evidence. While prior works attempt to mitigate this issue through decoding strategies, architectural modifications, or curated instruction data, they typically lack a quantitative measure of how much individual training samples or tokens actually benefit from the image. In this work, we introduce Visual Information Gain (VIG), a perplexity-based metric that measures the reduction in prediction uncertainty provided by visual input. VIG enables fine-grained analysis at both sample and token levels, effectively highlighting visually grounded elements such as colors, spatial relations, and attributes. Leveraging this, we propose a VIG-guided selective training scheme that prioritizes high-VIG samples and tokens. This approach improves visual grounding and mitigates language bias, achieving superior performance with significantly reduced supervision by focusing exclusively on visually informative samples and tokens.
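The selective training scheme described above prioritizes high-VIG samples and tokens. A hedged sketch of one plausible realization, assuming per-token losses and VIG scores have already been computed (the threshold, the top-fraction selection, and all names here are assumptions for illustration, not the paper's specification):

```python
def vig_masked_loss(token_nlls, token_vigs, vig_threshold=0.0):
    """Token-level selection: average the training loss only over tokens
    whose VIG exceeds a threshold, dropping visually redundant supervision."""
    kept = [nll for nll, g in zip(token_nlls, token_vigs) if g > vig_threshold]
    if not kept:
        return 0.0  # no visually informative tokens in this sample
    return sum(kept) / len(kept)

def select_high_vig_samples(samples, sample_vigs, top_fraction=0.5):
    # Sample-level selection: keep the top fraction of samples ranked by VIG.
    ranked = sorted(zip(sample_vigs, samples), key=lambda x: x[0], reverse=True)
    k = max(1, int(len(ranked) * top_fraction))
    return [s for _, s in ranked[:k]]
```

Under this reading, "significantly reduced supervision" corresponds to training on the filtered sample set while masking the loss on low-VIG tokens.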