🤖 AI Summary
Existing 3D-aware pretraining methods are constrained to Euclidean space, which limits their ability to model hierarchical structural relationships among embeddings and thereby hinders the robustness and generalization of robotic spatial perception. This work proposes HyperMVP, a framework that, for the first time, introduces hyperbolic geometry into multi-view self-supervised 3D pretraining. Using a GeoLink encoder to learn structured point cloud representations in hyperbolic space, optimized through a masked autoencoding objective, HyperMVP captures intrinsic hierarchical structure more effectively than Euclidean alternatives. After the encoder is pretrained on the large-scale 3D-MOV dataset, visuomotor policies fine-tuned on top of it show significant improvements over strong baselines on COLOSSEUM, RLBench, and real-world scenarios, with higher task success rates and greater robustness, particularly under perturbed environmental conditions.
📝 Abstract
3D-aware visual pretraining has proven effective in improving the performance of downstream robotic manipulation tasks. However, existing methods are constrained to Euclidean embedding spaces, whose flat geometry limits their ability to model structural relations among embeddings. As a result, they struggle to learn the structured embeddings that are essential for robust spatial perception in robotic applications. To address this, we propose HyperMVP, a self-supervised framework for Hyperbolic MultiView Pretraining. Hyperbolic space offers geometric properties well suited for capturing structural relations. Methodologically, we extend the masked autoencoder paradigm and design a GeoLink encoder to learn multi-view hyperbolic representations. The pretrained encoder is then fine-tuned with visuomotor policies on manipulation tasks. In addition, we introduce 3D-MOV, a large-scale dataset comprising multiple types of 3D point clouds to support pretraining. We evaluate HyperMVP on COLOSSEUM, RLBench, and real-world scenarios, where it consistently outperforms strong baselines across diverse tasks and perturbation settings. Our results highlight the potential of 3D-aware pretraining in a non-Euclidean space for learning robust and generalizable robotic manipulation policies.
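The abstract's central claim is that hyperbolic geometry is better suited than flat Euclidean space for encoding hierarchical structure. As a rough illustration only, and not the paper's implementation (the abstract does not specify which hyperbolic model or curvature is used), the sketch below projects Euclidean encoder features onto a Poincaré ball via the exponential map at the origin and compares them with the hyperbolic geodesic distance; the helper names, curvature value, and feature shapes are all assumptions.

```python
# Hypothetical sketch: Poincare-ball embedding and distance (not the authors' code).
import torch

def expmap0(v: torch.Tensor, c: float = 1.0, eps: float = 1e-6) -> torch.Tensor:
    """Exponential map at the origin: lift Euclidean vectors onto the Poincare ball."""
    sqrt_c = c ** 0.5
    norm = v.norm(dim=-1, keepdim=True).clamp_min(eps)
    return torch.tanh(sqrt_c * norm) * v / (sqrt_c * norm)

def mobius_add(x: torch.Tensor, y: torch.Tensor, c: float = 1.0) -> torch.Tensor:
    """Mobius addition, the hyperbolic analogue of vector addition on the ball."""
    x2 = (x * x).sum(dim=-1, keepdim=True)
    y2 = (y * y).sum(dim=-1, keepdim=True)
    xy = (x * y).sum(dim=-1, keepdim=True)
    num = (1 + 2 * c * xy + c * y2) * x + (1 - c * x2) * y
    den = 1 + 2 * c * xy + (c ** 2) * x2 * y2
    return num / den.clamp_min(1e-6)

def poincare_dist(x: torch.Tensor, y: torch.Tensor, c: float = 1.0) -> torch.Tensor:
    """Geodesic distance on the Poincare ball; it grows rapidly near the boundary,
    which is what lets tree-like hierarchies embed with low distortion."""
    sqrt_c = c ** 0.5
    diff_norm = mobius_add(-x, y, c).norm(dim=-1).clamp(max=(1 - 1e-5) / sqrt_c)
    return (2.0 / sqrt_c) * torch.atanh(sqrt_c * diff_norm)

# Example: project two batches of stand-in multi-view features and compare them.
feats_a = torch.randn(4, 256) * 0.1   # placeholder for point cloud embeddings
feats_b = torch.randn(4, 256) * 0.1
ha, hb = expmap0(feats_a), expmap0(feats_b)
print(poincare_dist(ha, hb))          # per-pair hyperbolic distances
```

In such a setup, a masked-autoencoding or contrastive objective would be computed with this hyperbolic distance instead of a Euclidean one, so that coarse-to-fine relations among views and parts can spread out along the ball's radial direction; the actual GeoLink encoder and training objective are defined in the paper itself.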