🤖 AI Summary
This work addresses the challenge of multimodal intent understanding and context-aware real-time assistance in AR/VR industrial training and on-site operational support. We propose an incremental multimodal prompting framework that fuses heterogeneous signals—including eye gaze, hand gestures, task progression, and dialogue history—to build a lightweight, environment-aware large language model (LLM) assistant. The framework is evaluated on the HoloAssist dataset. Our key contribution is the systematic quantification of each modality's contribution to LLM response quality, demonstrating that multimodal fusion significantly improves answer accuracy (+23.6%) and task relevance (+19.4%). Results show that the approach enables a scalable, low-latency, high-fidelity interaction paradigm for AR/VR intelligent assistance, advancing the deployment of foundation models in embodied industrial intelligence.
📝 Abstract
The growing adoption of augmented and virtual reality (AR and VR) technologies in industrial training and on-the-job assistance has created new opportunities for intelligent, context-aware support systems. As workers perform complex tasks guided by AR and VR, these devices capture rich streams of multimodal data, including gaze, hand actions, and task progression, that can reveal user intent and task state in real time. However, leveraging this information effectively remains a major challenge. In this work, we present a context-aware large language model (LLM) assistant that integrates diverse data modalities, such as hand actions, task steps, and dialogue history, into a unified framework for real-time question answering. To systematically study how context influences performance, we introduce an incremental prompting framework, in which each model version receives progressively richer contextual inputs. Using the HoloAssist dataset, which records AR-guided task executions, we evaluate how each modality contributes to the assistant's effectiveness. Our experiments show that incorporating multimodal context significantly improves the accuracy and relevance of responses. These findings highlight the potential of LLM-driven multimodal integration to enable adaptive, intuitive assistance for AR- and VR-based industrial training and support.
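The incremental prompting idea above can be sketched as follows. This is a minimal, hypothetical illustration only: the modality names, ordering, and example context are assumptions for exposition and are not taken from the paper's actual prompt templates. Each "version" of the assistant sees the question plus a progressively larger prefix of the available context modalities.

```python
# Illustrative sketch of incremental multimodal prompting (assumed structure,
# not the paper's actual templates). Version 0 sees only the question;
# each higher version prepends one more context modality.

CONTEXT_FIELDS = ["hand_actions", "task_step", "dialogue_history"]

def build_prompt(question: str, context: dict, level: int) -> str:
    """Build a prompt using the first `level` context modalities (0 = question only)."""
    parts = []
    for field in CONTEXT_FIELDS[:level]:
        value = context.get(field)
        if value:
            # e.g. "hand_actions" -> "Hand Actions: ..."
            parts.append(f"{field.replace('_', ' ').title()}: {value}")
    parts.append(f"Question: {question}")
    return "\n".join(parts)

# Hypothetical context captured during an AR-guided task.
context = {
    "hand_actions": "user is holding a torque wrench",
    "task_step": "step 3 of 7: tighten the mounting bolts",
    "dialogue_history": "assistant previously explained bolt ordering",
}

for level in range(len(CONTEXT_FIELDS) + 1):
    print(f"--- version {level} ---")
    print(build_prompt("What should I do next?", context, level))
```

Comparing the answers produced from each prompt version is what lets the per-modality contribution be quantified: the quality difference between version *k* and version *k*−1 is attributable to the modality added at level *k*.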