Latent Visual Reasoning

📅 2025-09-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing multimodal large language models (MLLMs) confine reasoning to the linguistic space, treating visual inputs as static premises, which impedes fine-grained, dynamic reasoning within the visual embedding space. To address this, we propose Latent Visual Reasoning (LVR), the first framework to extend autoregressive reasoning to the visual token level: the language model generates latent representations of salient visual tokens, enabling generative visual reasoning in a unified semantic space. The method combines a vision encoder, latent-state reconstruction training, and GRPO-based reinforcement learning to jointly optimize visual latent modeling and text generation. On the perception-intensive visual question answering benchmark MMVP, LVR reaches 71.67% accuracy, clearly surpassing Qwen2.5-VL (66.67%).

📝 Abstract
Multimodal Large Language Models (MLLMs) have achieved notable gains in various tasks by incorporating Chain-of-Thought (CoT) reasoning in language spaces. Recent work extends this direction by leveraging external tools for visual editing, thereby enhancing the visual signal along the reasoning trajectories. Nevertheless, these approaches remain fundamentally constrained: reasoning is still confined to the language space, with visual information treated as static preconditions. We introduce Latent Visual Reasoning (LVR), a new paradigm that enables autoregressive reasoning directly in the visual embedding space. A visual encoder first projects images into visual tokens within a joint semantic space shared with the language model. The language model is then trained to generate latent states that reconstruct key visual tokens critical for answering the query, constituting the process of latent visual reasoning. By interleaving LVR with standard text generation, our model achieves substantial gains on perception-intensive visual question answering tasks. In addition, we adapt the GRPO algorithm to conduct reinforcement learning on latent reasoning, further balancing LVR and textual generation. We show that LVR substantially improves fine-grained visual understanding and perception, achieving 71.67% on MMVP compared to 66.67% with Qwen2.5-VL. Code base and model weights will be released later.
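The paper's code has not been released yet, so the exact training objective is unknown. As a rough sketch of the "latent states that reconstruct key visual tokens" idea described in the abstract, one plausible formulation (our assumption, not the authors' confirmed loss) is a mean cosine-distance penalty between each LM-generated latent state and the visual token it should reconstruct:

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two dense vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sqrt(sum(a * a for a in u))
    nv = sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def latent_reconstruction_loss(pred_states, target_tokens):
    """Hypothetical LVR-style objective: mean (1 - cosine similarity)
    between LM-generated latent states and the encoder's visual tokens.
    Zero when every latent state points in the same direction as its target."""
    assert len(pred_states) == len(target_tokens)
    return sum(
        1.0 - cosine(p, t) for p, t in zip(pred_states, target_tokens)
    ) / len(pred_states)

# Perfect directional reconstruction gives zero loss:
print(latent_reconstruction_loss([[1.0, 0.0]], [[2.0, 0.0]]))  # 0.0
```

A direction-only loss like this is just one reasonable choice; an MSE on the raw embeddings would serve the same illustrative purpose.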
Problem

Research questions and friction points this paper is trying to address.

Enabling autoregressive reasoning in visual embedding space
Overcoming language-space limitations in multimodal reasoning
Improving fine-grained visual understanding through latent states
Innovation

Methods, ideas, or system contributions that make the work stand out.

Autoregressive reasoning in visual embedding space
Visual encoder projects images into shared semantic tokens
Reinforcement learning balances latent reasoning and text generation
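The abstract says GRPO is adapted to balance latent reasoning with text generation, but gives no details. At GRPO's core (independent of this paper's adaptation) is a critic-free, group-relative advantage: each sampled rollout's reward is normalized by the mean and standard deviation of its group. A minimal sketch, assuming standard normalization:

```python
from statistics import mean, pstdev

def group_relative_advantages(rewards, eps=1e-8):
    """GRPO-style advantages: normalize each rollout's reward by the
    mean and population std of its sampled group (no learned value critic).
    `eps` guards against a zero-variance group."""
    mu = mean(rewards)
    sigma = pstdev(rewards)
    return [(r - mu) / (sigma + eps) for r in rewards]

# Rollouts that beat the group average get positive advantage:
print(group_relative_advantages([1.0, 0.0, 1.0, 0.0]))
```

How the paper weighs latent-visual versus textual rewards inside these groups is not specified here; this only illustrates the group-relative baseline that distinguishes GRPO from PPO.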
Bangzheng Li
University of California, Davis
Natural Language Processing
Ximeng Sun
Advanced Micro Devices, Inc.
Jiang Liu
Advanced Micro Devices, Inc.
Ze Wang
Advanced Micro Devices, Inc.
Jialian Wu
AMD GenAI
LLM, Computer Vision
Xiaodong Yu
Advanced Micro Devices, Inc.
Hao Chen
Advanced Micro Devices, Inc.
Emad Barsoum
AMD, Columbia University
Generative AI, Foundation Models, Agentic AI, Computer Vision, ML Frameworks
Muhao Chen
Assistant Professor of Computer Science, University of California, Davis
Natural Language Processing, Robust ML, AI Safety, Vision-language Models
Zicheng Liu
Advanced Micro Devices, Inc.