🤖 AI Summary
Multimodal large language models (MLLMs) exhibit limited performance on fine-grained visual understanding tasks—such as counting, jigsaw solving, and object arrangement—due to their inability to reliably map complex visual inputs into reasoning-capable textual representations; merely extending reasoning steps fails to overcome this perceptual bottleneck.
Method: We propose the “autonomous imagination” paradigm: a training-free, plug-in imagination space enabling MLLMs to dynamically perform visual operations—including attention focusing, feature ignoring, and cross-modal transformations—thereby reformulating visual reasoning as a sequential, closed-loop decision process over imagined scenes. This extends chain-of-thought (CoT) reasoning to non-clue-driven, complex visual tasks for the first time.
Contribution/Results: The method is architecture-agnostic and achieves an average 27.4% accuracy gain across new benchmarks in dense counting, jigsaw solving, and object placement, demonstrating the stability and efficacy of multi-step autonomous imagination-based reasoning.
📝 Abstract
There have been recent efforts to extend the Chain-of-Thought (CoT) paradigm to Multimodal Large Language Models (MLLMs) by finding visual clues in the input scene, advancing their visual reasoning ability. However, current approaches are specially designed for tasks where clue finding dominates the reasoning process, and thus struggle with complex visual scenes where clue finding does not actually simplify the overall task. To address this challenge, we propose a new visual reasoning paradigm that enables MLLMs to autonomously modify the input scene based on their reasoning status, reformulating CoT as a sequence of simple closed-loop decision-making and reasoning steps over imagined visual scenes and yielding natural and general CoT construction. To implement this paradigm, we introduce a novel plug-and-play imagination space in which MLLMs conduct visual modifications through operations such as focus, ignore, and transform, relying on their native reasoning ability without task-specific training. We validate our approach on a benchmark spanning dense counting, simple jigsaw puzzle solving, and object placement, which challenges reasoning ability beyond clue finding. The results show that while existing techniques fall short, our approach enables MLLMs to reason effectively step by step through autonomous imagination. Project page: https://future-item.github.io/autoimagine-site.
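To make the closed-loop idea concrete, here is a minimal sketch of how imagination operations could drive a dense-counting task. The operation names (focus, ignore) come from the paper; the `Scene` representation and the counting policy are invented stand-ins for illustration, not the authors' implementation.

```python
from dataclasses import dataclass

# Hypothetical sketch of the "autonomous imagination" loop for counting.
# A real system would let the MLLM choose each operation; here the policy
# is hard-coded to show the closed-loop structure.

@dataclass
class Scene:
    objects: list  # each object: {"id": int, "kind": str}

def focus(scene: Scene, kind: str) -> Scene:
    """Attention focusing: keep only objects of the target kind."""
    return Scene([o for o in scene.objects if o["kind"] == kind])

def ignore(scene: Scene, ids: set) -> Scene:
    """Feature ignoring: imagine already-counted objects removed."""
    return Scene([o for o in scene.objects if o["id"] not in ids])

def count_by_imagination(scene: Scene, target_kind: str) -> int:
    """Closed loop: focus on targets, then repeatedly count one object,
    imagine it gone, and re-inspect the simplified scene until empty."""
    scene = focus(scene, target_kind)
    counted: set = set()
    while scene.objects:
        obj = scene.objects[0]   # reasoning step: pick one target
        counted.add(obj["id"])
        scene = ignore(scene, counted)  # next imagined scene
    return len(counted)
```

The point of the sketch is the structure: each iteration produces a strictly simpler imagined scene, so every individual reasoning step stays trivial even when the original scene is dense.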