M$^3$-ACE: Rectifying Visual Perception in Multimodal Math Reasoning via Multi-Agentic Context Engineering

📅 2026-03-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the frequent failure of multimodal large language models in visual mathematical reasoning due to inaccurate or incomplete visual perception, which is difficult to correct with conventional methods. To this end, the authors propose the M³-ACE framework, a perception-centric multi-agent collaboration mechanism that decouples perception from reasoning and dynamically maintains a shared context centered on visual evidence. Two lightweight tools support this collaboration: a Summary Tool that condenses and organizes evidence from different agents, and a Refine Tool that filters unreliable samples and guides correction, together enabling conflict detection and iterative refinement. Experiments demonstrate that M³-ACE achieves a new state-of-the-art accuracy of 89.1% on MathVision and consistently improves performance on other benchmarks such as MathVista and MathVerse.

📝 Abstract
Multimodal large language models have recently shown promising progress in visual mathematical reasoning. However, their performance is often limited by a critical yet underexplored bottleneck: inaccurate visual perception. Through systematic analysis, we find that most failures originate from incorrect or incomplete visual evidence extraction rather than deficiencies in reasoning capability. Moreover, models tend to remain overly confident in their initial perceptions, making standard strategies such as prompt engineering, multi-round self-reflection, or posterior guidance insufficient to reliably correct errors. To address this limitation, we propose M³-ACE, a multi-agentic context engineering framework designed to rectify visual perception in multimodal math reasoning. Instead of directly aggregating final answers, our approach decouples perception and reasoning by dynamically maintaining a shared context centered on visual evidence lists. Multiple agents collaboratively contribute complementary observations, enabling the system to expose inconsistencies and recover missing perceptual information. To support stable multi-turn collaboration, we further introduce two lightweight tools: a Summary Tool that organizes evidence from different agents into consistent, complementary, and conflicting components, and a Refine Tool that filters unreliable samples and guides iterative correction. Extensive experiments demonstrate that M³-ACE substantially improves visual mathematical reasoning performance across multiple benchmarks. Our method establishes a new state-of-the-art result of 89.1% on the MathVision benchmark and achieves consistent improvements on other related datasets, including MathVista and MathVerse. These results highlight the importance of perception-centric multi-agent collaboration for advancing multimodal reasoning systems.
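The abstract describes a Summary Tool that sorts evidence from multiple agents into consistent, complementary, and conflicting components, and a Refine Tool that decides whether further correction rounds are needed. The sketch below is a minimal, hypothetical illustration of that partitioning logic (the paper's actual tools operate on model-generated text, and all function names, data shapes, and the stopping rule here are assumptions, not the authors' implementation):

```python
from collections import defaultdict

def summary_tool(agent_evidence):
    """Partition (key, value) claims from several agents into consistent,
    complementary, and conflicting components. A claim might look like
    ("angle_ABC", "60"); the representation is illustrative only."""
    claims = defaultdict(set)     # claim key -> set of reported values
    support = defaultdict(int)    # claim key -> number of agents reporting it
    for evidence in agent_evidence:
        for key, value in evidence:
            claims[key].add(value)
            support[key] += 1
    n_agents = len(agent_evidence)
    consistent, complementary, conflicting = {}, {}, {}
    for key, values in claims.items():
        if len(values) > 1:
            conflicting[key] = values          # agents disagree on this claim
        elif support[key] == n_agents:
            consistent[key] = values.pop()     # every agent reports the same value
        else:
            complementary[key] = values.pop()  # only some agents noticed it
    return consistent, complementary, conflicting

def refine_tool(conflicting):
    """Toy stand-in for the Refine Tool: request another perception round
    whenever any conflicting evidence remains."""
    return len(conflicting) > 0

# Example: three agents read the same figure and report claims.
agents = [
    [("angle_ABC", "60"), ("radius", "5")],
    [("angle_ABC", "60"), ("radius", "4")],
    [("angle_ABC", "60"), ("side_AB", "10")],
]
consistent, complementary, conflicting = summary_tool(agents)
# consistent:    {"angle_ABC": "60"}   (all agents agree)
# complementary: {"side_AB": "10"}     (seen by one agent only)
# conflicting:   {"radius": {"5", "4"}} (agents disagree -> needs refinement)
```

The point of the partition is that conflicting entries expose perception errors that a single over-confident model would never surface, which is what motivates the iterative correction loop in the paper.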
Problem

Research questions and friction points this paper is trying to address.

visual perception
multimodal math reasoning
perceptual errors
visual evidence extraction
multimodal large language models
Innovation

Methods, ideas, or system contributions that make the work stand out.

multi-agent collaboration
visual perception rectification
context engineering
multimodal reasoning
evidence refinement
Peijin Xie
ITNLP Lab, Harbin Institute of Technology
Zhen Xu
Platform and Content Group, Tencent
Bingquan Liu
ITNLP Lab, Harbin Institute of Technology
Baoxun Wang
Platform and Content Group, Tencent
Natural Language Processing · Deep Learning · Chat-Bot