🤖 AI Summary
Current large language models (LLMs) lack dynamic interaction and embodied cognition capabilities in multimodal reasoning. To address this, this paper systematically surveys recent advances and proposes a dual-paradigm framework for multimodal reasoning: "language-centric" and "collaborative". Methodologically, it integrates vision-language understanding, active visual perception, action-driven reasoning, and state modeling into a unified technical pathway supporting full-modality input and embodied behavior generation. It also introduces the first comprehensive taxonomy and benchmark task map covering perception, reasoning, action, and state updating in multimodal reasoning. Contributions include: (1) a precise delineation of the boundaries between the two paradigms; (2) an evolutionary roadmap from vision-language reasoning toward fully multimodal agents; and (3) an evaluable theoretical framework and technical guidance for embodied intelligence and general multimodal cognition.
📝 Abstract
Language models have recently advanced into the realm of reasoning, yet it is through multimodal reasoning that we can fully unlock the potential to achieve more comprehensive, human-like cognitive capabilities. This survey provides a systematic overview of recent multimodal reasoning approaches, categorizing them into two levels: language-centric multimodal reasoning and collaborative multimodal reasoning. The former encompasses one-pass visual perception and active visual perception, where vision primarily serves a supporting role in language reasoning. The latter involves action generation and state updates within the reasoning process, enabling more dynamic interaction between modalities. Furthermore, we analyze the technical evolution of these methods, discuss their inherent challenges, and introduce key benchmark tasks and evaluation metrics for assessing multimodal reasoning performance. Finally, we provide insights into future research directions from two perspectives: (i) from visual-language reasoning to omnimodal reasoning, and (ii) from multimodal reasoning to multimodal agents. This survey aims to provide a structured overview that will inspire further advancements in multimodal reasoning research.