🤖 AI Summary
This work addresses three key challenges hindering multimodal large language models (MLLMs) in complex 3D object arrangement tasks: weak 3D vision–language alignment, insufficient spatial reasoning, and poor robustness to iterative editing. We propose the first framework for fault-tolerant, multi-step 3D scene manipulation. Methodologically: (1) we introduce a function-level 3D editing API built on the Model Context Protocol (MCP); (2) we design a tool-augmented, three-role multi-agent system (Planner, Executor, and Verifier) for collaborative task decomposition and execution; and (3) we incorporate a closed-loop perceptual feedback mechanism for iterative scene-state verification. Evaluated on 25 high-complexity 3D arrangement tasks, our approach significantly outperforms existing baselines, enabling high-precision, recoverable, and multi-turn interactive 3D manipulation. This work establishes a novel paradigm for extending MLLMs toward embodied intelligence and comprehensive 3D world understanding.
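The function-level editing API described above can be pictured as a small set of typed scene operations that the model invokes by name, rather than emitting raw scene code. The sketch below is a hypothetical illustration; the class, function names, and parameters are our assumptions, not the paper's actual interface.

```python
# Hypothetical sketch of a function-level 3D editing API of the kind an
# MCP server might expose to an MLLM. Names and signatures are illustrative
# assumptions, not the paper's actual interface.
from dataclasses import dataclass, field


@dataclass
class SceneObject:
    name: str
    position: tuple  # (x, y, z) in scene coordinates


@dataclass
class Scene:
    objects: dict = field(default_factory=dict)

    def add_object(self, name, position=(0.0, 0.0, 0.0)):
        """Function-level update: add an object by name."""
        self.objects[name] = SceneObject(name, position)

    def move_object(self, name, position):
        """Function-level update: set an object's position directly,
        instead of having the model edit raw scene code."""
        self.objects[name].position = position

    def get_state(self):
        """Read-back of current object positions, usable by a
        verification step to check action outcomes."""
        return {n: o.position for n, o in self.objects.items()}
```

Because each call is a named, validated operation on scene state, a failed or incorrect edit can be detected and retried in isolation, which is what makes the updates more robust than free-form code manipulation.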
📝 Abstract
Despite the remarkable progress of Multimodal Large Language Models (MLLMs) in 2D vision-language tasks, their application to complex 3D scene manipulation remains underexplored. In this paper, we bridge this critical gap by tackling three key challenges in 3D object arrangement tasks using MLLMs. First, to address the weak visual grounding of MLLMs, which struggle to link programmatic edits with precise 3D outcomes, we introduce an MCP-based API. This shifts the interaction from brittle raw-code manipulation to more robust, function-level updates. Second, we augment the MLLM's 3D scene understanding with a suite of specialized visual tools to analyze scene state, gather spatial information, and validate action outcomes. This perceptual feedback loop is critical for closing the gap between language-based updates and precise 3D-aware manipulation. Third, to manage iterative, error-prone updates, we propose a collaborative multi-agent framework with designated roles for planning, execution, and verification. This decomposition allows the system to robustly handle multi-step instructions and recover from intermediate errors. We demonstrate the effectiveness of our approach on a diverse set of 25 complex object arrangement tasks, where it significantly outperforms existing baselines. Website: vulcan-3d.github.io
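The plan-execute-verify decomposition described in the abstract can be sketched as a simple control loop. The stand-in role functions below are our assumptions for illustration; in the actual system each role is an MLLM agent with tool access, and the retry path is what provides recovery from intermediate errors.

```python
# Hypothetical sketch of a Planner -> Executor -> Verifier loop.
# Role bodies are trivial stand-ins; in the paper each role is an
# MLLM agent equipped with visual tools.

def planner(instruction):
    # Decompose an instruction into atomic edit steps (stubbed here).
    return [("move", "chair", (1.0, 0.0, 2.0))]

def executor(step, scene):
    # Apply one edit step to the scene state.
    op, name, pos = step
    if op == "move":
        scene[name] = pos
    return scene

def verifier(step, scene):
    # Check the scene state against the intended outcome,
    # closing the perceptual feedback loop.
    _, name, pos = step
    return scene.get(name) == pos

def run(instruction, scene, max_retries=3):
    for step in planner(instruction):
        for _ in range(max_retries):
            scene = executor(step, scene)
            if verifier(step, scene):
                break  # step verified; move on to the next step
    return scene
```

Bounding the per-step retries keeps an unrecoverable edit from stalling the whole multi-step instruction, while verified steps are never revisited.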