Beyond Accuracy: Evaluating Grounded Visual Evidence in Thinking with Images

📅 2026-01-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limitation of current vision-language model (VLM) evaluations, which predominantly focus on answer accuracy while neglecting whether the reasoning process genuinely relies on fine-grained visual evidence. To this end, we propose ViEBench—the first evaluation framework designed for process-verifiable visual reasoning—comprising a dataset of 200 high-resolution images annotated with expert-verified visual evidence. We introduce a dual-axis evaluation matrix and a four-quadrant diagnostic mechanism based on perceptual and reasoning difficulty. This benchmark effectively uncovers hidden failures such as “correct answers derived from flawed reasoning” and precisely identifies misalignments between visual evidence grounding and logical inference, thereby offering a more reliable, transparent, and interpretable evaluation standard for VLMs in embodied intelligence applications.

📝 Abstract
Despite the remarkable progress of Vision-Language Models (VLMs) in adopting "Thinking-with-Images" capabilities, accurately evaluating the authenticity of their reasoning process remains a critical challenge. Existing benchmarks mainly rely on outcome-oriented accuracy, lacking the capability to assess whether models can accurately leverage fine-grained visual cues for multi-step reasoning. To address these limitations, we propose ViEBench, a process-verifiable benchmark designed to evaluate faithful visual reasoning. Comprising 200 multi-scenario high-resolution images with expert-annotated visual evidence, ViEBench uniquely categorizes tasks by difficulty along perception and reasoning dimensions, where reasoning tasks require combining localized visual details with prior knowledge. To establish comprehensive evaluation criteria, we introduce a dual-axis matrix that provides fine-grained metrics through four diagnostic quadrants, enabling transparent diagnosis of model behavior across varying task complexities. Our experiments yield several interesting observations: (1) VLMs can sometimes produce correct final answers despite grounding on irrelevant regions, and (2) they may successfully locate the correct evidence but still fail to utilize it to reach accurate conclusions. Our findings demonstrate that ViEBench can serve as a more explainable and practical benchmark for comprehensively evaluating the effectiveness of agentic VLMs. The code will be released at: https://github.com/Xuchen-Li/ViEBench.
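The four-quadrant diagnostic described above can be sketched as a simple classifier. This is an illustrative reading, assuming the two axes are evidence-grounding correctness and final-answer correctness (consistent with observations (1) and (2) in the abstract); the quadrant names here are hypothetical, not the paper's terminology:

```python
from enum import Enum


class Quadrant(Enum):
    """Illustrative labels for the four diagnostic quadrants."""
    FAITHFUL = "correct evidence, correct answer"
    LUCKY_GUESS = "wrong evidence, correct answer"      # observation (1)
    REASONING_GAP = "correct evidence, wrong answer"    # observation (2)
    FULL_FAILURE = "wrong evidence, wrong answer"


def diagnose(grounding_correct: bool, answer_correct: bool) -> Quadrant:
    """Map one model response onto the dual-axis evaluation matrix."""
    if grounding_correct and answer_correct:
        return Quadrant.FAITHFUL
    if answer_correct:
        return Quadrant.LUCKY_GUESS
    if grounding_correct:
        return Quadrant.REASONING_GAP
    return Quadrant.FULL_FAILURE
```

Aggregating these per-response labels over the benchmark would yield the fine-grained, process-level metrics the paper argues accuracy alone cannot provide.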
Problem

Research questions and friction points this paper is trying to address.

Vision-Language Models
Visual Reasoning
Grounded Evidence
Process Evaluation
Benchmarking
Innovation

Methods, ideas, or system contributions that make the work stand out.

Faithful Reasoning
Visual Evidence Grounding
Process-Verifiable Benchmark
Vision-Language Models
Dual-Axis Evaluation