EmbodiedOcc: Embodied 3D Occupancy Prediction for Vision-based Online Scene Understanding

📅 2024-12-05
🏛️ arXiv.org
📈 Citations: 10
Influential: 3
🤖 AI Summary
Existing 3D occupancy prediction methods primarily target offline, single-view perception and thus fail to support embodied agents in dynamically constructing scene understanding during active exploration. This work introduces embodied 3D occupancy prediction—a novel task enabling online, incremental, global scene modeling from progressive visual observations. Our approach features: (1) the first embodied prediction paradigm that employs an updateable 3D semantic Gaussian field as an explicit global memory; (2) a deformable cross-attention feature fusion mechanism coupled with a Gaussian-to-voxel splatting scheme for efficient joint geometric-semantic optimization; and (3) EmbodiedOcc-ScanNet—the first benchmark tailored for embodied scene understanding. Experiments demonstrate that our method significantly outperforms existing local prediction approaches in accuracy, real-time update capability, and scalability.

📝 Abstract
3D occupancy prediction provides a comprehensive description of the surrounding scenes and has become an essential task for 3D perception. Most existing methods focus on offline perception from one or a few views and cannot be applied to embodied agents, which need to gradually perceive the scene through progressive embodied exploration. In this paper, we formulate an embodied 3D occupancy prediction task to target this practical scenario and propose a Gaussian-based EmbodiedOcc framework to accomplish it. We initialize the global scene with uniform 3D semantic Gaussians and progressively update local regions observed by the embodied agent. For each update, we extract semantic and structural features from the observed image and efficiently incorporate them via deformable cross-attention to refine the regional Gaussians. Finally, we employ Gaussian-to-voxel splatting to obtain the global 3D occupancy from the updated 3D Gaussians. Our EmbodiedOcc assumes an unknown (i.e., uniformly distributed) environment and maintains an explicit global memory of it with 3D Gaussians. It gradually gains knowledge through the local refinement of regional Gaussians, which is consistent with how humans understand new scenes through embodied exploration. We reorganize an EmbodiedOcc-ScanNet benchmark based on local annotations to facilitate the evaluation of the embodied 3D occupancy prediction task. Experiments demonstrate that our EmbodiedOcc outperforms existing local prediction methods and accomplishes embodied occupancy prediction with high accuracy and strong expandability. Code: https://github.com/YkiWu/EmbodiedOcc.
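The update loop the abstract describes (a uniformly initialized global Gaussian memory, with only the regional Gaussians visible from the current view refined per observation) can be sketched as follows. This is a minimal NumPy sketch: all function names are illustrative rather than the paper's API, and the deformable cross-attention refinement is replaced by a simple blending placeholder.

```python
import numpy as np

def init_global_gaussians(n, scene_min, scene_max, num_classes, seed=0):
    """Uniformly initialize Gaussian means over the (unknown) scene volume."""
    rng = np.random.default_rng(seed)
    means = rng.uniform(scene_min, scene_max, size=(n, 3))
    logits = np.zeros((n, num_classes))  # uninformative semantics at start
    return {"means": means, "logits": logits}

def in_frustum(means, cam_pos, cam_dir, max_depth=5.0, fov_cos=0.5):
    """Boolean mask of Gaussians inside a simplified conical view frustum."""
    v = means - cam_pos
    depth = np.linalg.norm(v, axis=1)
    cos_angle = (v @ cam_dir) / np.maximum(depth, 1e-8)
    return (depth < max_depth) & (cos_angle > fov_cos)

def update_step(memory, cam_pos, cam_dir, observed_logits_fn):
    """Refine only the regional Gaussians visible from the current view.

    observed_logits_fn stands in for the paper's image-feature extraction
    plus deformable cross-attention; here we simply blend the stored
    semantics toward the newly observed evidence.
    """
    mask = in_frustum(memory["means"], cam_pos, cam_dir)
    obs = observed_logits_fn(memory["means"][mask])
    memory["logits"][mask] = 0.5 * memory["logits"][mask] + 0.5 * obs
    return mask
```

Gaussians outside every frustum so far retain their uninformative initialization, which matches the paper's assumption of an unknown environment that is filled in through exploration.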
Problem

Research questions and friction points this paper is trying to address.

Predicting 3D occupancy progressively through embodied exploration
Understanding unknown environments via vision-based online perception
Refining 3D semantic Gaussians through local observation updates
Innovation

Methods, ideas, or system contributions that make the work stand out.

Gaussian-based framework for embodied 3D occupancy
Deformable cross-attention refines regional semantic Gaussians
Gaussian-to-voxel splatting generates global occupancy maps
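The Gaussian-to-voxel splatting step above can be illustrated with a minimal isotropic sketch. This is a hypothetical simplification, not the paper's implementation: anisotropic covariances and learned opacities are reduced to a single isotropic scale per Gaussian, and semantics are accumulated as density-weighted logits.

```python
import numpy as np

def gaussian_to_voxel_splat(means, scales, logits, grid_min, voxel_size, grid_shape):
    """Splat 3D semantic Gaussians onto a voxel grid (isotropic sketch).

    means:  (N, 3) Gaussian centers
    scales: (N,)   isotropic standard deviations
    logits: (N, C) per-Gaussian semantic logits
    Returns the (X, Y, Z, C) accumulated semantics and per-voxel argmax labels.
    """
    num_classes = logits.shape[1]
    sem = np.zeros((*grid_shape, num_classes))
    # Voxel-center coordinates of the grid
    idx = np.stack(np.meshgrid(*[np.arange(s) for s in grid_shape],
                               indexing="ij"), axis=-1)
    centers = grid_min + (idx + 0.5) * voxel_size  # (X, Y, Z, 3)
    for mu, s, l in zip(means, scales, logits):
        d2 = ((centers - mu) ** 2).sum(-1)  # squared distance to this Gaussian
        w = np.exp(-0.5 * d2 / (s ** 2))    # Gaussian density weight per voxel
        sem += w[..., None] * l             # weighted semantic accumulation
    labels = sem.argmax(-1)
    return sem, labels
```

Because the splat is a differentiable sum over Gaussians, voxel-level occupancy supervision can propagate gradients back to the Gaussian parameters, which is what enables the joint geometric-semantic optimization the summary mentions.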
Yuqi Wu
PhD, University of Alberta; Fudan University
Natural Language Processing, Computational Psychiatry, Large Language Models
Wenzhao Zheng
EECS, University of California, Berkeley
Large Models, Embodied Agents, Autonomous Driving
Sicheng Zuo
Department of Automation, Tsinghua University, China
Yuanhui Huang
Tsinghua University
Computer Vision, Autonomous Driving
Jie Zhou
Department of Automation, Tsinghua University, China
Jiwen Lu
Department of Automation, Tsinghua University, China