VISCO: Benchmarking Fine-Grained Critique and Correction Towards Self-Improvement in Visual Reasoning

📅 2024-12-03
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
🤖 AI Summary
Large vision-language models (LVLMs) show weak fine-grained critique and self-correction capabilities in visual reasoning. Method: VISCO is the first benchmark of dense, step-level critique for chain-of-thought visual reasoning: models must judge the correctness of each reasoning step and justify each judgment in natural language. The paper identifies three typical critique failure modes and introduces the LookBack mechanism, which revisits the original image to verify the visual premises of each step. Contribution/Results: Evaluation of 24 LVLMs shows that human-written critiques substantially improve correction performance, whereas model-generated critiques help little and can even hurt, indicating that critique is the key bottleneck. LookBack improves critique and correction performance by up to 13.5%, offering a concrete path toward LVLM self-improvement.

📝 Abstract
The ability of large vision-language models (LVLMs) to critique and correct their reasoning is an essential building block towards their self-improvement. However, a systematic analysis of such capabilities in LVLMs is still lacking. We propose VISCO, the first benchmark to extensively analyze the fine-grained critique and correction capabilities of LVLMs. Compared to existing work that uses a single scalar value to critique the entire reasoning [4], VISCO features dense and fine-grained critique, requiring LVLMs to evaluate the correctness of each step in the chain-of-thought and provide natural language explanations to support their judgments. Extensive evaluation of 24 LVLMs demonstrates that human-written critiques significantly enhance the performance after correction, showcasing the potential of the self-improvement strategy. However, the model-generated critiques are less helpful and sometimes detrimental to the performance, suggesting that critique is the crucial bottleneck. We identified three common patterns in critique failures: failure to critique visual perception, reluctance to "say no", and exaggerated assumption of error propagation. To address these issues, we propose an effective LookBack strategy that revisits the image to verify each piece of information in the initial reasoning. LookBack significantly improves critique and correction performance by up to 13.5%.
Problem

Research questions and friction points this paper is trying to address.

Analyzes fine-grained critique and correction in LVLMs.
Identifies critique failures in visual reasoning models.
Proposes LookBack strategy to improve critique performance.
Innovation

Methods, ideas, or system contributions that make the work stand out.

VISCO benchmark analyzes LVLM critique and correction.
LookBack strategy verifies image information for accuracy.
Dense critique evaluates each reasoning step in detail.
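The dense-critique idea above can be sketched in code. The following is a minimal, hypothetical illustration, not the paper's implementation: `ask_model` stands in for whatever LVLM critic is used, and the toy critic, image path, and example steps are invented for demonstration. The key point it shows is that LookBack judges each chain-of-thought step by re-querying the image, rather than critiquing the reasoning text alone, and emits a per-step verdict with a natural-language explanation.

```python
from dataclasses import dataclass

@dataclass
class StepCritique:
    step: str
    correct: bool
    explanation: str

def lookback_critique(image, cot_steps, ask_model):
    """Dense per-step critique with a LookBack pass.

    `ask_model(image, question)` is a hypothetical LVLM call returning
    (is_supported: bool, explanation: str); it stands in for the critic model.
    """
    critiques = []
    for step in cot_steps:
        # LookBack: re-query the image to verify this step's visual premise,
        # instead of judging the step from the reasoning text alone.
        supported, why = ask_model(image, f"Does the image support: '{step}'?")
        critiques.append(StepCritique(step, supported, why))
    return critiques

# Toy stand-in critic: flags any step containing a known-wrong visual claim.
def toy_critic(image, question):
    if "the traffic light is green" in question:
        return False, "LookBack: the light in the image is red."
    return True, "Consistent with the image."

steps = [
    "There is a traffic light.",
    "the traffic light is green",
    "Therefore the car may proceed.",
]
result = lookback_critique("img.png", steps, toy_critic)
for c in result:
    print(c.correct, "-", c.step)
```

Unlike a single scalar score over the whole chain, this per-step structure localizes the error (here, the middle perception step) so a downstream correction pass knows exactly what to fix.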
👥 Authors
Xueqing Wu (University of California, Los Angeles)
Yuheng Ding (Peking University)
Bingxuan Li (UIUC)
Pan Lu (Stanford)
Da Yin (Meta FAIR)
Kai-Wei Chang (University of California, Los Angeles)
Nanyun Peng (University of California, Los Angeles)