🤖 AI Summary
Large vision-language models (LVLMs) exhibit weak fine-grained critique and self-correction capabilities in visual reasoning. Method: We introduce VISCO, the first benchmark for dense, step-level critique of chain-of-thought reasoning, requiring models to assess the correctness of each reasoning step and justify each judgment in natural language. We identify three typical critique failure modes and introduce the LookBack mechanism, which revisits the original image to verify the visual premises of each reasoning step. Contribution/Results: Empirical evaluation across 24 LVLMs shows that human-written critiques substantially improve correction performance, whereas model-generated critiques yield limited or even detrimental effects. LookBack improves critique and correction performance by up to 13.5%, establishing a verifiable pathway for LVLM self-improvement.
📝 Abstract
The ability of large vision-language models (LVLMs) to critique and correct their reasoning is an essential building block towards their self-improvement. However, a systematic analysis of such capabilities in LVLMs is still lacking. We propose VISCO, the first benchmark to extensively analyze the fine-grained critique and correction capabilities of LVLMs. Compared to existing work that uses a single scalar value to critique the entire reasoning [4], VISCO features dense and fine-grained critique, requiring LVLMs to evaluate the correctness of each step in the chain-of-thought and provide natural language explanations to support their judgments. Extensive evaluation of 24 LVLMs demonstrates that human-written critiques significantly enhance the performance after correction, showcasing the potential of the self-improvement strategy. However, model-generated critiques are less helpful and sometimes detrimental to the performance, suggesting that critique is the crucial bottleneck. We identify three common patterns in critique failures: failure to critique visual perception, reluctance to "say no", and exaggerated assumption of error propagation. To address these issues, we propose an effective LookBack strategy that revisits the image to verify each piece of information in the initial reasoning. LookBack significantly improves critique and correction performance by up to 13.5%.
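The dense step-level critique and the LookBack idea can be sketched as follows. This is a minimal toy illustration, not the paper's implementation: all names are hypothetical, and `toy_verify` stands in for what would really be an LVLM re-querying the image for each step's visual claims.

```python
from dataclasses import dataclass

@dataclass
class StepCritique:
    """Per-step critique: a correctness label plus a natural-language
    explanation, rather than one scalar score for the whole chain."""
    step_text: str
    correct: bool
    explanation: str

def lookback_critique(image_objects, steps, verify_fn):
    """LookBack-style critique (sketch): revisit the image for every
    reasoning step and check the visual premises that step relies on."""
    critiques = []
    for step in steps:
        ok, why = verify_fn(image_objects, step)  # hypothetical per-step verifier
        critiques.append(StepCritique(step["text"], ok, why))
    return critiques

def toy_verify(image_objects, step):
    """Stand-in for an LVLM call: flags steps whose claimed objects are
    absent from a (stub) set of objects visible in the image."""
    missing = [c for c in step["claims"] if c not in image_objects]
    if missing:
        return False, "image does not show: " + ", ".join(missing)
    return True, "visual claims verified against the image"

# Toy example: the second step hallucinates a dog, so LookBack rejects it.
image = {"cat", "mat"}
steps = [
    {"text": "A cat is sitting on a mat.", "claims": ["cat", "mat"]},
    {"text": "The dog next to it is brown.", "claims": ["dog"]},
]
critiques = lookback_critique(image, steps, toy_verify)
```

Here `critiques[0]` is marked correct while `critiques[1]` is rejected with an explanation naming the missing object, mirroring how step-level labels localize the first error instead of discarding the whole chain.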