🤖 AI Summary
Current multimodal large language models (MLLMs) rely heavily on visual recognition when interpreting unannotated charts and lack fundamental visual reasoning capabilities such as numerical estimation, leading to frequent hallucinations and poor generalization. To address this, we introduce CRBench, the first benchmark dedicated to visual reasoning over charts, and propose ChartReasoner, a framework that integrates visual grounding modeling, progressive numerical estimation, and instruction tuning to provide controllable, stepwise reasoning guidance. Using only lightweight 3B/7B models, ChartReasoner achieves substantial reasoning gains, significantly outperforming GPT-4o and Gemini-2.5-Flash on CRBench. It also improves average performance across general chart understanding tasks by 12.7%, effectively mitigates hallucinations, and, crucially, is the first work to systematically identify and bridge this core deficiency in MLLMs' chart-based visual reasoning.
📝 Abstract
Although Multimodal Large Language Models (MLLMs) have demonstrated increasingly impressive performance in chart understanding, most exhibit alarming hallucinations and significant performance degradation when handling non-annotated charts. This raises a natural question: do MLLMs really understand charts? Since humans are capable of understanding charts and estimating values through visual reasoning, we first carefully establish a comprehensive Chart Reasoning Benchmark, CRBench, to rigorously evaluate the visual reasoning abilities of MLLMs on non-annotated charts. We argue that MLLMs rely primarily on recognition rather than reasoning to interpret charts. To steer MLLMs toward reasonable chart understanding, we propose ChartReasoner, which mimics human behavior by grounding its estimates in chart understanding. Extensive results on the proposed CRBench show that ChartReasoner-3B/7B achieves superior performance in chart reasoning, even compared to GPT-4o and Gemini-2.5-Flash. More importantly, ChartReasoner also demonstrates visual reasoning abilities in general chart comprehension on public benchmarks, yielding significant performance gains and enabling MLLMs to understand charts rationally. The code and dataset will be publicly available upon publication.