🤖 AI Summary
This work addresses the instability and low accuracy of visual servoing for textureless objects under adverse visual conditions such as occlusion, where conventional feature-based methods fail due to insufficient visual cues. To overcome this limitation, the authors propose a tightly coupled perception-control closed-loop visual servoing framework. The approach fuses learned keypoint detections over time with an Extended Kalman Filter (EKF) to estimate the object’s 6D pose, which drives a pose-based visual servoing (PBVS) controller. Camera motion feedback is incorporated to enhance the robustness of keypoint tracking. Furthermore, a novel uncertainty-aware probabilistic control law is introduced to enable safe and precise manipulation of textureless objects. Real-world robotic experiments demonstrate that the proposed method significantly outperforms traditional visual servoing techniques in both pose estimation accuracy and grasping success rate.
📝 Abstract
Visual servoing is fundamental to robotic applications, enabling precise positioning and control. However, applying it to textureless objects remains a challenge due to the absence of reliable visual features. Moreover, adverse visual conditions, such as occlusions, often corrupt visual feedback, leading to reduced accuracy and instability in visual servoing. In this work, we build upon learning-based keypoint detection for textureless objects and propose a method that enhances robustness by tightly integrating perception and control in a closed loop. Specifically, we employ an Extended Kalman Filter (EKF) that integrates per-frame keypoint measurements to estimate the 6D object pose, which drives pose-based visual servoing (PBVS) for control. The resulting camera motion, in turn, enhances the tracking of subsequent keypoints, effectively closing the perception-control loop. Additionally, unlike standard PBVS, we propose a probabilistic control law that computes both the camera velocity and its associated uncertainty, enabling uncertainty-aware control for safe and reliable operation. We validate our approach on real-world robotic platforms using quantitative metrics and grasping experiments, demonstrating that our method outperforms traditional visual servoing techniques in both accuracy and practical application.
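To make the pipeline concrete, here is a minimal sketch of how an EKF pose estimate with covariance could feed an uncertainty-aware PBVS command. This is an illustration under stated assumptions, not the paper's actual formulation: the function names (`pbvs_control`, `gate_command`), the linear covariance propagation, and the trace-based gating rule are all hypothetical simplifications of a probabilistic control law.

```python
import numpy as np

def pbvs_control(t_err, rot_err, pose_cov, lam=0.5):
    """Hypothetical uncertainty-aware PBVS sketch (not the paper's exact law).

    t_err:    3D translation error between current and desired camera pose.
    rot_err:  3D axis-angle rotation error.
    pose_cov: 6x6 covariance of the stacked pose error, e.g. from an EKF.

    Applies the classical proportional law v = -lam * e and, because the map
    is linear, propagates the covariance to first order as lam^2 * pose_cov.
    """
    e = np.concatenate([t_err, rot_err])   # 6D pose error vector
    v = -lam * e                           # camera twist command (v, omega)
    v_cov = (lam ** 2) * pose_cov          # covariance of the command
    return v, v_cov

def gate_command(v, v_cov, sigma_max=0.1):
    """Illustrative safety gate: shrink the command when its predicted
    uncertainty (here, root-trace of the covariance) is large."""
    sigma = np.sqrt(np.trace(v_cov))
    scale = min(1.0, sigma_max / max(sigma, 1e-9))
    return scale * v
```

The key idea this sketch captures is that the controller never consumes the pose estimate alone: the covariance travels alongside it, so commands issued under high uncertainty (e.g. during occlusion) are automatically attenuated rather than executed at full speed.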