AffordBot: 3D Fine-grained Embodied Reasoning via Multimodal Large Language Models

📅 2025-11-13
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Fine-grained embodied reasoning remains a gap in physical human–robot collaboration, particularly in grounding natural language instructions in 3D environments and inferring precise, physically plausible interactions. Method: We introduce "fine-grained 3D embodied reasoning", a new task requiring models to localize the referenced interactive elements in a 3D point cloud and jointly predict their spatial location, motion type, and motion axis. To address this, we propose an instruction-driven framework that integrates multimodal large language models with a tailored chain-of-thought module; it aligns 3D input with 2D-compatible MLLMs via surround-view rendering and projection of 3D element candidates. Contribution/Results: The method achieves state-of-the-art performance on the SceneFun3D benchmark, demonstrating strong generalization and physically consistent motion inference from 3D point cloud input alone, without extra RGB or depth priors. It establishes an interpretable, executable paradigm for fine-grained interaction understanding in embodied intelligence.
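To make the predicted triplet concrete, here is a minimal Python sketch of what a per-element prediction could look like; the field names, motion-type vocabulary, and example values are illustrative assumptions, not the paper's data format.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Tuple


class MotionType(Enum):
    """Illustrative motion vocabulary; the paper's exact label set may differ."""
    ROTATION = "rotation"        # e.g., a door swinging about a hinge
    TRANSLATION = "translation"  # e.g., a drawer sliding along a rail


@dataclass
class AffordancePrediction:
    """Structured triplet predicted for one referenced affordance element."""
    location: Tuple[float, float, float]     # 3D position of the interactive element
    motion_type: MotionType                  # how the element moves when actuated
    motion_axis: Tuple[float, float, float]  # unit direction of the rotation/translation axis


# Hypothetical example for the instruction "open the top drawer of the nightstand"
drawer = AffordancePrediction(
    location=(1.20, 0.45, 0.62),
    motion_type=MotionType.TRANSLATION,
    motion_axis=(0.0, 1.0, 0.0),  # pull direction, expressed in scene coordinates
)
```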

📝 Abstract
Effective human-agent collaboration in physical environments requires understanding not only what to act upon, but also where the actionable elements are and how to interact with them. Existing approaches often operate at the object level or disjointedly handle fine-grained affordance reasoning, lacking coherent, instruction-driven grounding and reasoning. In this work, we introduce a new task: Fine-grained 3D Embodied Reasoning, which requires an agent to predict, for each referenced affordance element in a 3D scene, a structured triplet comprising its spatial location, motion type, and motion axis, based on a task instruction. To solve this task, we propose AffordBot, a novel framework that integrates Multimodal Large Language Models (MLLMs) with a tailored chain-of-thought (CoT) reasoning paradigm. To bridge the gap between 3D input and 2D-compatible MLLMs, we render surround-view images of the scene and project 3D element candidates into these views, forming a rich visual representation aligned with the scene geometry. Our CoT pipeline begins with an active perception stage, prompting the MLLM to select the most informative viewpoint based on the instruction, before proceeding with step-by-step reasoning to localize affordance elements and infer plausible interaction motions. Evaluated on the SceneFun3D dataset, AffordBot achieves state-of-the-art performance, demonstrating strong generalization and physically grounded reasoning with only 3D point cloud input and MLLMs.
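As a rough illustration of the 3D-to-2D bridge described in the abstract, the sketch below projects 3D element candidate centers into a rendered surround view with a standard pinhole camera model; the function name, camera parameters, and NaN convention are assumptions for exposition, not AffordBot's implementation.

```python
import numpy as np


def project_candidates(points_world: np.ndarray,
                       K: np.ndarray,
                       T_world_to_cam: np.ndarray) -> np.ndarray:
    """Project Nx3 candidate centers (world frame) to pixel coordinates.

    K: 3x3 camera intrinsics of the rendered surround view.
    T_world_to_cam: 4x4 extrinsics mapping world points into the camera frame.
    Returns an Nx2 array of (u, v) pixels; points behind the camera get NaN.
    """
    n = points_world.shape[0]
    homo = np.hstack([points_world, np.ones((n, 1))])  # Nx4 homogeneous coordinates
    cam = (T_world_to_cam @ homo.T).T[:, :3]           # Nx3 points in the camera frame
    uv = np.full((n, 2), np.nan)
    in_front = cam[:, 2] > 1e-6                        # keep only points with positive depth
    pix = (K @ cam[in_front].T).T                      # perspective projection
    uv[in_front] = pix[:, :2] / pix[:, 2:3]            # divide by depth to get pixels
    return uv
```

Marking each visible candidate in the rendered image with its index gives the MLLM a shared vocabulary for referring to specific 3D elements during reasoning.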
Problem

Research questions and friction points this paper is trying to address.

Predicting spatial location, motion type, and axis for 3D affordance elements
Integrating multimodal reasoning with 3D scene understanding through rendered views
Enabling instruction-driven embodied reasoning for human-agent physical collaboration
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multimodal LLMs with tailored chain-of-thought reasoning
Rendering surround-view images from 3D point clouds
Active perception that selects the most informative viewpoint before reasoning (a minimal sketch follows this list)
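The two-stage loop can be pictured as follows: one MLLM query performs active perception by picking the most informative rendered view for the instruction, and a second query reasons step by step over that view to localize the element and infer its motion. The `query_mllm` helper, prompts, and JSON schema below are placeholders, not the authors' prompts or API.

```python
import json
from typing import Callable, Dict, List

# query_mllm is a placeholder for any chat-style multimodal LLM call:
# it takes a text prompt plus a list of image paths and returns the model's text reply.
QueryFn = Callable[[str, List[str]], str]


def affordbot_style_pipeline(instruction: str,
                             view_paths: List[str],
                             query_mllm: QueryFn) -> Dict:
    """Hypothetical two-stage prompting loop: active perception, then grounded reasoning."""
    # Stage 1: active perception -- ask which rendered view best shows
    # the interactive element referenced by the instruction.
    pick_prompt = (
        f"Instruction: {instruction}\n"
        f"You are shown {len(view_paths)} surround views of the scene, numbered from 0.\n"
        "Which single view best shows the element to interact with? Answer with its number."
    )
    view_id = int(query_mllm(pick_prompt, view_paths).strip())

    # Stage 2: step-by-step reasoning on the chosen view over projected candidates.
    reason_prompt = (
        f"Instruction: {instruction}\n"
        "Candidate interactive elements are marked with numeric tags in this view.\n"
        "Think step by step, then output only a JSON object with keys: "
        "'element_id', 'motion_type' (rotation or translation), 'motion_axis' (unit 3-vector)."
    )
    answer = query_mllm(reason_prompt, [view_paths[view_id]])
    return json.loads(answer)
```

Keeping both stages anchored to the same rendered views ties the reasoning chain to the scene geometry, which is what makes the final motion prediction physically checkable.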
Authors
Xinyi Wang
University of Science and Technology of China
Xun Yang
University of Science and Technology of China
Yanlong Xu
University of Science and Technology of China
Yuchen Wu
Singapore University of Technology and Design
Zhen Li
Chinese University of Hong Kong, Shenzhen
Na Zhao
Singapore University of Technology and Design