๐ค AI Summary
Addressing the significant domain gap between simulation-based training and real-robot deployment, as well as the limited capability of existing models to effectively fuse RGB, depth, and point cloud modalities, this work introduces DROID-3Dโthe first high-quality, embodied manipulation-oriented 3D multimodal datasetโand EmbodiedMAE, a novel multimodal masked autoencoder supporting joint masked reconstruction of RGB, depth maps, and point clouds. Its key innovations include: (i) the first implementation of cross-modal feature alignment and unified 3D representation learning; and (ii) geometric consistency modeling between point clouds and images to enhance 3D perception. Evaluated on 70 simulated and 20 real-robot manipulation tasks, EmbodiedMAE consistently outperforms state-of-the-art vision foundation models: it reduces policy transfer sample requirements by 60%, improves training efficiency by 40%, and accelerates convergence by 2.3ร.
๐ Abstract
We present EmbodiedMAE, a unified 3D multi-modal representation for robot manipulation. Current approaches suffer from significant domain gaps between training datasets and robot manipulation tasks, while also lacking model architectures that can effectively incorporate 3D information. To overcome these limitations, we enhance the DROID dataset with high-quality depth maps and point clouds, constructing DROID-3D as a valuable supplement for 3D embodied vision research. Then we develop EmbodiedMAE, a multi-modal masked autoencoder that simultaneously learns representations across RGB, depth, and point cloud modalities through stochastic masking and cross-modal fusion. Trained on DROID-3D, EmbodiedMAE consistently outperforms state-of-the-art vision foundation models (VFMs) in both training efficiency and final performance across 70 simulation tasks and 20 real-world robot manipulation tasks on two robot platforms. The model exhibits strong scaling behavior with size and promotes effective policy learning from 3D inputs. Experimental results establish EmbodiedMAE as a reliable unified 3D multi-modal VFM for embodied AI systems, particularly in precise tabletop manipulation settings where spatial perception is critical.