Enhancing Visual Reasoning with Autonomous Imagination in Multimodal Large Language Models

📅 2024-11-27
🏛️ arXiv.org
📈 Citations: 4
Influential: 0
🤖 AI Summary
Multimodal large language models (MLLMs) exhibit limited performance on fine-grained visual understanding tasks—such as counting, jigsaw solving, and object arrangement—due to their inability to reliably map complex visual inputs into reasoning-capable textual representations; merely extending reasoning steps fails to overcome this perceptual bottleneck. Method: We propose the “autonomous imagination” paradigm: a training-free, plug-in imagination space enabling MLLMs to dynamically perform visual operations—including attention focusing, feature ignoring, and cross-modal transformations—thereby reformulating visual reasoning as a sequential, closed-loop decision process over imagined scenes. This extends chain-of-thought (CoT) reasoning to non-clue-driven, complex visual tasks for the first time. Contribution/Results: The method is architecture-agnostic and achieves an average 27.4% accuracy gain across new benchmarks in dense counting, jigsaw solving, and object placement, demonstrating the stability and efficacy of multi-step autonomous imagination-based reasoning.

📝 Abstract
There have been recent efforts to extend the Chain-of-Thought (CoT) paradigm to Multimodal Large Language Models (MLLMs) by finding visual clues in the input scene, advancing the visual reasoning ability of MLLMs. However, current approaches are specially designed for tasks where clue finding plays a major role in the reasoning process, making it difficult to handle complex visual scenes where clue finding does not actually simplify the overall reasoning task. To address this challenge, we propose a new visual reasoning paradigm that enables MLLMs to autonomously modify the input scene into new ones based on their reasoning status, so that CoT is reformulated as a sequence of simple closed-loop decision-making and reasoning steps over imagined visual scenes, leading to natural and general CoT construction. To implement this paradigm, we introduce a novel plug-and-play imagination space, where MLLMs conduct visual modifications through operations such as focus, ignore, and transform, relying on their native reasoning ability without task-specific training. We validate our approach on a benchmark spanning dense counting, simple jigsaw puzzle solving, and object placement, challenging reasoning ability beyond clue finding. The results verify that while existing techniques fall short, our approach enables MLLMs to reason effectively step by step through autonomous imagination. Project page: https://future-item.github.io/autoimagine-site.
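The closed-loop control flow the abstract describes (decide on a visual operation, apply it to get a new imagined scene, reason, repeat) can be sketched in miniature. This is an illustrative toy, not the paper's implementation: `Scene`, `choose_operation`, and `apply_operation` are hypothetical stand-ins, and the hand-written policy below (focus on one object per step for a dense-counting task) replaces the MLLM that would make these decisions in the actual paradigm.

```python
from dataclasses import dataclass, field

@dataclass
class Scene:
    """Toy stand-in for an imagined visual scene: a set of object positions."""
    objects: set = field(default_factory=set)

def choose_operation(scene: Scene, count: int):
    """Toy decision step: focus on one remaining object, or stop when done.
    In the paradigm, the MLLM itself would pick focus / ignore / transform
    based on its current reasoning status."""
    if scene.objects:
        target = next(iter(scene.objects))
        return ("focus", target)
    return ("stop", None)

def apply_operation(scene: Scene, op):
    """Apply the chosen visual modification, yielding a new imagined scene."""
    kind, target = op
    if kind == "focus":
        # Count the focused object, then ignore it in the next scene.
        return Scene(set(scene.objects) - {target}), 1
    return scene, 0

def autonomous_imagination_count(initial_scene: Scene, max_steps: int = 100) -> int:
    """Closed loop: decide -> modify scene -> reason, until a stop decision."""
    scene, count = initial_scene, 0
    for _ in range(max_steps):
        op = choose_operation(scene, count)
        if op[0] == "stop":
            break
        scene, delta = apply_operation(scene, op)
        count += delta
    return count

print(autonomous_imagination_count(Scene({(1, 2), (3, 4), (5, 6)})))  # prints 3
```

The point of the sketch is the reformulation: instead of one monolithic visual-to-textual conversion, each loop iteration only has to make a simple decision about a simpler scene, which is why the paradigm is described as training-free and plug-and-play.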
Problem

Research questions and friction points this paper is trying to address.

Enhancing visual-to-textual conversion in MLLMs
Solving perceptual bottlenecks in visual reasoning tasks
Decomposing complex visual inputs into manageable substeps
Innovation

Methods, ideas, or system contributions that make the work stand out.

Closed-loop decomposition of visual-to-textual conversion
Iterative visual input modification for MLLMs
Autonomous imagination enhances visual reasoning
Jingming Liu
State Key Laboratory of CAD&CG, Zhejiang University
Yumeng Li
State Key Laboratory of CAD&CG, Zhejiang University
Boyuan Xiao
State Key Laboratory of CAD&CG, Zhejiang University
Yichang Jian
State Key Laboratory of CAD&CG, Zhejiang University
Ziang Qin
State Key Laboratory of CAD&CG, Zhejiang University
Tianjia Shao
University of Leeds
computer graphics
Yao-Xiang Ding
Assistant Professor, Zhejiang University
machine learning
Kun Zhou
State Key Laboratory of CAD&CG, Zhejiang University