Gaussian Process-Based Active Exploration Strategies in Vision and Touch

📅 2025-07-07
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Robots struggle with reliable manipulation in unstructured environments due to the absence of prior knowledge about object geometry, material properties, and semantic identity. To address this, we propose Gaussian Process Distance Fields (GPDF), the first framework to integrate Gaussian processes into vision–touch cross-modal active perception. GPDF jointly models object geometry and surface attributes while providing explicit, principled uncertainty quantification. It supports incremental multimodal fusion without large-scale pretraining, leveraging differentiable rendering, analytic gradient and Hessian computation, and uncertainty-driven exploration. Implemented on a Franka robot equipped with a DIGIT tactile sensor and an RGB-D camera, GPDF achieves active 3D reconstruction of static objects. Experiments demonstrate substantial improvements in geometric completion accuracy for complex shapes and validate its extensibility to surface property inference.
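The geometric core of such a distance field can be illustrated with a plain GP regressor over signed-distance observations, including the analytic gradient and posterior variance the summary mentions. This is a minimal sketch under assumed hyperparameters, not the paper's implementation; the class name `GPDistanceField` and the RBF kernel choice are illustrative:

```python
import numpy as np

def rbf_kernel(A, B, ell=0.3):
    # Squared-exponential kernel between point sets A (n,3) and B (m,3).
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell**2)

class GPDistanceField:
    """Toy GP regressor over signed-distance observations (illustrative)."""
    def __init__(self, X, y, ell=0.3, noise=1e-4):
        self.X, self.ell = X, ell
        K = rbf_kernel(X, X, ell) + noise * np.eye(len(X))
        self.L = np.linalg.cholesky(K)
        self.alpha = np.linalg.solve(self.L.T, np.linalg.solve(self.L, y))

    def predict(self, Xq):
        # Posterior mean and variance of the signed distance at query points.
        Kq = rbf_kernel(Xq, self.X, self.ell)
        mean = Kq @ self.alpha
        v = np.linalg.solve(self.L, Kq.T)
        var = 1.0 - (v ** 2).sum(0)  # RBF prior variance is 1
        return mean, var

    def gradient(self, xq):
        # Analytic gradient of the posterior mean at one query point:
        # d/dx k(x, x_i) = -k(x, x_i) * (x - x_i) / ell^2 for the RBF kernel.
        k = rbf_kernel(xq[None], self.X, self.ell)[0]            # (n,)
        dk = -(xq[None] - self.X) * (k / self.ell ** 2)[:, None]  # (n,3)
        return dk.T @ self.alpha
```

The posterior variance is what drives exploration: regions the sensors have not covered keep variance near the prior, flagging them as worth touching.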

πŸ“ Abstract
Robots struggle to understand object properties such as shape, material, and semantics due to limited prior knowledge, hindering manipulation in unstructured environments. In contrast, humans learn these properties through interactive multi-sensor exploration. This work proposes fusing visual and tactile observations into a unified Gaussian Process Distance Field (GPDF) representation for active perception of object properties. While primarily focused on geometry, the approach also demonstrates potential for modeling surface properties beyond geometry. The GPDF encodes signed distance from a point cloud and provides analytic gradients and Hessians as well as surface uncertainty estimates, attributes that common neural-network shape representations lack. Because it constructs the distance function from a point cloud, GPDF does not need extensive pretraining on large datasets and can incorporate new observations by aggregation. Starting from an initial visual shape estimate, the framework iteratively refines the geometry by integrating dense vision measurements through differentiable rendering and tactile measurements at uncertain surface regions. By quantifying multi-sensor uncertainties, it plans exploratory motions that maximize information gain for recovering precise 3D structure. For the real-world robot experiment, we use a Franka Research 3 manipulator fixed to a table, with a customized DIGIT tactile sensor and an Intel RealSense D435 RGB-D camera mounted on the end-effector. In these experiments, the robot explores the shape and properties of static objects placed on the table. To improve scalability, we investigate approximation methods such as the inducing-point method for Gaussian processes. This probabilistic multi-modal fusion enables active exploration and mapping of complex object geometries, extending potentially beyond geometry.
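The scalability point about inducing points can be sketched with a subset-of-regressors approximation, one common inducing-point scheme that replaces the O(n³) exact GP solve with an O(nm²) solve over m inducing points. This is not necessarily the variant the paper uses; function names and hyperparameters here are assumptions:

```python
import numpy as np

def rbf(A, B, ell=0.7):
    # Squared-exponential kernel between point sets A (n,3) and B (m,3).
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell**2)

def sor_fit(X, y, Z, ell=0.7, noise=1e-4):
    """Subset-of-regressors fit: solve for weights over m inducing points Z,
    costing O(n m^2) instead of the exact GP's O(n^3)."""
    Kmn = rbf(Z, X, ell)                                   # (m, n)
    Kmm = rbf(Z, Z, ell)                                   # (m, m)
    A = Kmn @ Kmn.T + noise * Kmm + 1e-8 * np.eye(len(Z))  # jitter for stability
    return np.linalg.solve(A, Kmn @ y)

def sor_predict(Xq, Z, w, ell=0.7):
    # Approximate posterior mean: weighted sum of kernels at the inducing points.
    return rbf(Xq, Z, ell) @ w
```

Choosing the inducing points (e.g., a subsample of the observed point cloud) trades accuracy against memory, which is what makes incremental fusion of dense vision data tractable.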
Problem

Research questions and friction points this paper is trying to address.

Robots lack prior knowledge to understand object properties effectively
Fusing vision and tactile data for active object perception
Improving 3D object geometry and surface property modeling
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fuses vision and touch into Gaussian Process Distance Field
Uses a point cloud for the distance function without pretraining
Plans motions to maximize information gain
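The last bullet can be illustrated with a greedy, variance-based heuristic: among candidate touch points, select the one where the GP posterior variance is highest, i.e., the point current observations explain worst. The paper's planner is richer than this; `next_touch` is a made-up name and the kernel settings are assumptions:

```python
import numpy as np

def rbf(A, B, ell=0.4):
    # Squared-exponential kernel between point sets A (n,3) and B (m,3).
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell**2)

def next_touch(X_obs, candidates, ell=0.4, noise=1e-4):
    """Greedy uncertainty-driven selection: return the candidate with the
    largest GP posterior variance given observation locations X_obs.
    (Posterior variance depends only on input locations, not targets.)"""
    K = rbf(X_obs, X_obs, ell) + noise * np.eye(len(X_obs))
    Kc = rbf(candidates, X_obs, ell)   # (c, n)
    V = np.linalg.solve(K, Kc.T)       # (n, c)
    var = 1.0 - (Kc.T * V).sum(0)      # diag of posterior covariance; prior var is 1
    return candidates[np.argmax(var)]
```

Repeating this select-touch-update loop is the simplest form of information-gain-driven exploration: each touch lands where the model is least certain.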