🤖 AI Summary
Existing methods struggle to bridge the semantic gap between complex natural-language instructions and precise 3D object localization in open-world settings, failing to jointly achieve strong reasoning capability and fine-grained 3D spatial understanding. This paper introduces REALM, an MLLM-agent framework that couples multimodal large language models (MLLMs) with 3D Gaussian Splatting representations to enable reasoning-based 3D segmentation and editing, without requiring any 3D-specific post-training. Its core innovation is a Global-to-Local Spatial Grounding strategy that mitigates viewpoint sensitivity: multiple global views rendered from the Gaussian Splatting scene are fed to the MLLM in parallel for coarse localization, and close-up novel views of the identified object are then synthesized for fine-grained segmentation. The framework handles ambiguous, reasoning-intensive instructions for object localization, removal, replacement, and style transfer, achieves state-of-the-art performance on the LERF, 3D-OVS, and REALM3D benchmarks, and marks the first end-to-end editable 3D vision-language understanding system for open-domain scenarios.
📝 Abstract
Bridging the gap between complex human instructions and precise 3D object grounding remains a significant challenge in vision and robotics. Existing 3D segmentation methods often struggle to interpret ambiguous, reasoning-based instructions, while 2D vision-language models that excel at such reasoning lack intrinsic 3D spatial understanding. In this paper, we introduce REALM, an innovative MLLM-agent framework that enables open-world reasoning-based segmentation without requiring extensive 3D-specific post-training. We perform segmentation directly on 3D Gaussian Splatting representations, capitalizing on their ability to render photorealistic novel views that are highly suitable for MLLM comprehension. As directly feeding one or more rendered views to the MLLM can lead to high sensitivity to viewpoint selection, we propose a novel Global-to-Local Spatial Grounding strategy. Specifically, multiple global views are first fed into the MLLM agent in parallel for coarse-level localization, aggregating responses to robustly identify the target object. Then, several close-up novel views of the object are synthesized to perform fine-grained local segmentation, yielding accurate and consistent 3D masks. Extensive experiments show that REALM achieves remarkable performance in interpreting both explicit and implicit instructions across LERF, 3D-OVS, and our newly introduced REALM3D benchmarks. Furthermore, our agent framework seamlessly supports a range of 3D interaction tasks, including object removal, replacement, and style transfer, demonstrating its practical utility and versatility. Project page: https://ChangyueShi.github.io/REALM.
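The two-stage pipeline in the abstract (parallel coarse localization over global views, then fine-grained segmentation on synthesized close-ups) can be sketched in code. This is a minimal illustrative sketch, not the authors' implementation: every function name, the stubbed rendering/MLLM/segmentation calls, and the majority-vote aggregation of per-view answers are assumptions introduced for clarity.

```python
# Hypothetical sketch of the Global-to-Local Spatial Grounding strategy.
# All functions below are stubs standing in for 3DGS rendering, MLLM
# queries, and mask prediction; only the control flow mirrors the text.
from collections import Counter

def render_global_views(scene, n_views):
    # Stand-in for rendering several photorealistic global views
    # from the 3D Gaussian Splatting representation.
    return [f"{scene}/global_view_{i}" for i in range(n_views)]

def mllm_locate(view, instruction):
    # Stand-in for one parallel MLLM call: given a rendered view and
    # a (possibly implicit) instruction, return a candidate target label.
    return "mug" if "coffee" in instruction else "unknown"

def aggregate(candidates):
    # Coarse localization made robust to viewpoint: majority vote
    # over the per-view answers.
    return Counter(candidates).most_common(1)[0][0]

def global_to_local_grounding(scene, instruction, n_views=4):
    # Stage 1 (global): query the MLLM on multiple global views in
    # parallel and aggregate responses to identify the target object.
    views = render_global_views(scene, n_views)
    candidates = [mllm_locate(v, instruction) for v in views]
    target = aggregate(candidates)
    # Stage 2 (local): synthesize close-up novel views of the target
    # and run fine-grained segmentation on each (stubbed).
    closeups = [f"{scene}/closeup_{target}_{i}" for i in range(2)]
    masks = [f"mask({c})" for c in closeups]
    return target, masks

target, masks = global_to_local_grounding(
    "kitchen", "something to drink coffee from")
print(target)
```

The aggregation step is the key design choice: because any single rendered view may occlude or misrepresent the target, pooling several independent MLLM answers before committing to a region makes the coarse stage far less sensitive to viewpoint selection.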