🤖 AI Summary
Traditional robot teaching by demonstration and teleoperation suffer from low intuitiveness, poor spatial precision, and difficulty in specifying complex 3D tasks. To address these challenges, this paper introduces GhostObjects, a spatially aligned virtual-twin interaction paradigm enabled by augmented reality (AR). Users manipulate virtual objects with natural hand gestures to non-invasively specify target poses and spatial constraints for physical robots. The system integrates high-fidelity spatial registration, geometric snapping, multi-object lasso selection, and snapping back to default positions, achieving millisecond-scale responsiveness and sub-centimeter positioning accuracy. Crucially, GhostObjects requires no programming expertise or specialized hardware, significantly enhancing the intuitiveness, flexibility, and expressive power of spatial task specification. It supports not only pick-and-place operations but also multi-stage collaborative spatial tasks. By providing a scalable, intuitive interaction infrastructure, GhostObjects advances natural human-robot coexistence and shared autonomy.
📄 Abstract
Robots are increasingly capable of autonomous operation, yet human interaction remains essential for issuing personalized instructions. Instead of directly controlling robots through Programming by Demonstration (PbD) or teleoperation, we propose giving instructions by interacting with GhostObjects (world-aligned, life-size virtual twins of physical objects) in augmented reality (AR). By directly manipulating GhostObjects, users can precisely specify physical goals and spatial parameters, with features including real-world lasso selection of multiple objects and snapping back to default positions, enabling tasks beyond simple pick-and-place.
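The "snapping back to default positions" feature described above can be sketched in a few lines. This is a hypothetical illustration, not the paper's implementation: all names (`snap_to_default`, `SNAP_THRESHOLD_M`) and the specific threshold value are assumptions; the idea is simply that a released GhostObject pose is compared against registered default positions and snapped when it lands within a small radius.

```python
import math

# Assumed snap radius (meters); the paper reports sub-centimeter positioning
# accuracy, but the actual threshold used by the system is not stated.
SNAP_THRESHOLD_M = 0.03

def snap_to_default(released_pos, default_positions, threshold=SNAP_THRESHOLD_M):
    """Return the nearest registered default position if the released
    GhostObject position is within `threshold` of it; otherwise keep
    the released position as the specified goal."""
    nearest = min(default_positions, key=lambda p: math.dist(released_pos, p))
    return nearest if math.dist(released_pos, nearest) <= threshold else released_pos

# Usage: an object released 2 cm from a default slot snaps back to it,
# while one released far from any slot keeps its user-specified pose.
defaults = [(0.0, 0.0, 0.0), (0.5, 0.0, 0.0)]
print(snap_to_default((0.02, 0.0, 0.0), defaults))  # -> (0.0, 0.0, 0.0)
print(snap_to_default((0.20, 0.0, 0.0), defaults))  # -> (0.2, 0.0, 0.0)
```

The same nearest-neighbor-plus-threshold pattern would extend naturally to the geometric snapping of orientations or surface alignments mentioned in the summary.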