🤖 AI Summary
This work proposes SIAgent, an "Intent-to-Operation" framework that redefines eye-hand interaction in virtual reality by shifting from predefined gestures to an intent-driven paradigm grounded in commonsense reasoning and user habits. By fusing eye-gaze and hand-motion data to infer spatial intent and leveraging a large language model for natural language-based reasoning, the system enables agents to execute tasks without requiring users to memorize specific gestures. The approach accommodates personalized motion preferences and exhibits high error tolerance. Evaluated across more than 60 tasks, it achieves 97.2% intent recognition accuracy, outperforming the 93.1% of a conventional gaze + pinch baseline, while substantially reducing arm fatigue and improving both usability and user satisfaction.
📝 Abstract
Eye-hand coordinated interaction is becoming a mainstream interaction modality in Virtual Reality (VR) user interfaces. Current paradigms for this multimodal interaction require users to learn predefined gestures and memorize multiple gesture-task associations, which can be summarized as an "Operation-to-Intent" paradigm. This paradigm increases users' learning costs and has low tolerance for interaction errors. In this paper, we propose SIAgent, a novel "Intent-to-Operation" framework that allows users to express interaction intents through natural eye-hand motions grounded in common sense and habit. Our system features two main components: (1) intent recognition, which translates spatial interaction data into natural language and infers user intent, and (2) agent-based execution, which generates an agent to carry out the corresponding task. This eliminates the need for gesture memorization and accommodates individual motion preferences with high error tolerance. We conduct two user studies across more than 60 interaction tasks, comparing our method with two "Operation-to-Intent" techniques. Results show that our method achieves higher intent recognition accuracy than gaze + pinch interaction (97.2% vs. 93.1%) while reducing arm fatigue and improving usability and user preference. A further study verifies the roles of the eye-gaze and hand-motion channels in intent recognition. Our work offers valuable insights into enhancing VR interaction intelligence through intent-driven design. Our source code and LLM prompts will be made available upon publication.
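The intent-recognition stage described above (translating spatial interaction data into natural language before LLM-based inference) can be sketched as follows. This is a minimal illustrative mock-up, not the paper's implementation: the data fields, the `verbalize` helper, and the keyword-based `infer_intent` stand-in for the actual LLM call are all assumptions introduced for clarity.

```python
# Hypothetical sketch of an "Intent-to-Operation" pipeline: fused eye-gaze
# and hand-motion data are verbalized into a natural-language description,
# which a reasoning model would then map to a user intent. The rule-based
# classifier below is only a placeholder for the LLM prompt/response step.
from dataclasses import dataclass


@dataclass
class SpatialFrame:
    gaze_target: str   # object the eye gaze currently rests on
    hand_motion: str   # coarse description of the hand trajectory
    hand_near: str     # object closest to the hand


def verbalize(frame: SpatialFrame) -> str:
    """Translate raw spatial interaction data into natural language."""
    return (f"The user is looking at the {frame.gaze_target}, "
            f"while their hand performs a {frame.hand_motion} "
            f"near the {frame.hand_near}.")


def infer_intent(description: str) -> str:
    """Placeholder for the LLM reasoning step: in the real system this
    description would be embedded in a prompt and sent to an LLM."""
    if "grasping motion" in description:
        return "pick_up"
    if "pushing motion" in description:
        return "move_away"
    return "unknown"


frame = SpatialFrame("red cube", "grasping motion", "red cube")
prompt = verbalize(frame)
print(prompt)
print(infer_intent(prompt))  # -> pick_up
```

The key design point this illustrates is that, because intent is inferred from a free-form description rather than matched against a fixed gesture vocabulary, imprecise or idiosyncratic motions can still resolve to the correct intent, which is what gives the paradigm its error tolerance.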