🤖 AI Summary
To address the poor robustness and limited generalization of image-based visual servoing (IBVS) under occlusion and environmental variations, this paper proposes a task- and object-agnostic visual servoing method. The core innovation is the integration of semantic features from a pre-trained Vision Transformer (ViT) into the IBVS framework, enabling zero-shot cross-object and cross-scene generalization via joint image feature matching and Jacobian estimation. Evaluated in simulation and sim-to-real transfer, the method achieves full convergence under nominal conditions; under disturbances, it reduces positioning error by up to 31.2% relative to classical IBVS while matching the convergence rate of supervised learning methods. Real-world experiments demonstrate applicability to industrial bin-picking and to grasping of unseen objects, requiring only category-level reference images. This work bridges classical IBVS and learning-based approaches, improving robustness and generalization without any task-specific training.
📝 Abstract
Visual servoing enables robots to precisely position their end-effector relative to a target object. Classical methods rely on hand-crafted features and are therefore universally applicable without task-specific training, but they often struggle with occlusions and environmental variations; learning-based approaches improve robustness but typically require extensive training. We present a visual servoing approach that leverages pretrained vision transformers for semantic feature extraction, combining the advantages of both paradigms while also generalizing beyond the provided sample. Our approach achieves full convergence in unperturbed scenarios and surpasses classical image-based visual servoing by up to 31.2% relative improvement in perturbed scenarios. It also matches the convergence rates of learning-based methods despite requiring no task- or object-specific training. Real-world evaluations confirm robust performance in end-effector positioning, industrial box manipulation, and grasping of unseen objects using only a reference from the same category. Our code and simulation environment are available at: https://alessandroscherl.github.io/ViT-VS/
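For readers unfamiliar with the underlying control law, the classical IBVS loop that methods like this build on can be sketched as follows. This is a minimal point-feature sketch, not the paper's implementation: in ViT-VS the matched features come from ViT correspondences rather than hand-crafted detectors, and the function names, gain `lam`, and depth inputs here are illustrative assumptions.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Image Jacobian (interaction matrix) for one normalized image
    point (x, y) at depth Z, mapping the 6-DoF camera twist
    [vx, vy, vz, wx, wy, wz] to image-plane feature motion."""
    return np.array([
        [-1.0 / Z, 0.0,       x / Z,  x * y,        -(1.0 + x * x),  y],
        [0.0,      -1.0 / Z,  y / Z,  1.0 + y * y,  -x * y,         -x],
    ])

def ibvs_velocity(features, desired, depths, lam=0.5):
    """Classical IBVS control law: v = -lam * L^+ * (s - s*),
    where s are current matched features and s* the reference features
    (e.g. extracted from a category-level reference image)."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    error = (np.asarray(features) - np.asarray(desired)).ravel()
    return -lam * np.linalg.pinv(L) @ error  # 6-vector camera twist

# When current and reference features coincide, the commanded twist is zero:
print(ibvs_velocity([(0.1, 0.2)], [(0.1, 0.2)], [1.0]))
```

The design choice that the paper varies is *where* `features` and `desired` come from: classical IBVS uses hand-crafted keypoints, while semantic ViT features make the same control law work across unseen objects of a category.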