Teaching LLMs to See and Guide: Context-Aware Real-Time Assistance in Augmented Reality

📅 2025-11-01
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the challenge of multimodal intent understanding and context-aware real-time assistance in AR/VR industrial training and on-site operational support. We propose an incremental multimodal prompting framework that fuses heterogeneous signals—including eye gaze, hand gestures, task progression, and dialogue history—to construct a lightweight, environment-aware large language model (LLM) assistant. The framework is trained and evaluated on the HoloAssist dataset. Our key contribution is the systematic quantification of differential contributions from individual modalities to LLM response quality, demonstrating that multimodal fusion significantly improves answer accuracy (+23.6%) and task relevance (+19.4%). Results show that the approach enables a scalable, low-latency, high-fidelity interaction paradigm for AR/VR intelligent assistance, advancing the deployment of foundation models in embodied industrial intelligence.

Technology Category

Application Category

📝 Abstract
The growing adoption of augmented and virtual reality (AR and VR) technologies in industrial training and on-the-job assistance has created new opportunities for intelligent, context-aware support systems. As workers perform complex tasks guided by AR and VR, these devices capture rich streams of multimodal data, including gaze, hand actions, and task progression, that can reveal user intent and task state in real time. Leveraging this information effectively remains a major challenge. In this work, we present a context-aware large language model (LLM) assistant that integrates diverse data modalities, such as hand actions, task steps, and dialogue history, into a unified framework for real-time question answering. To systematically study how context influences performance, we introduce an incremental prompting framework, where each model version receives progressively richer contextual inputs. Using the HoloAssist dataset, which records AR-guided task executions, we evaluate how each modality contributes to the assistant's effectiveness. Our experiments show that incorporating multimodal context significantly improves the accuracy and relevance of responses. These findings highlight the potential of LLM-driven multimodal integration to enable adaptive, intuitive assistance for AR and VR-based industrial training and assistance.
Problem

Research questions and friction points this paper is trying to address.

Integrating multimodal data for real-time AR assistance
Evaluating context influence on LLM performance incrementally
Improving response accuracy with multimodal context integration
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates multimodal data into unified real-time framework
Uses incremental prompting to evaluate contextual input impact
Leverages LLMs for adaptive assistance in AR and VR
🔎 Similar Papers
No similar papers found.
M
Mahya Qorbani
H. Milton Stewart School of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, GA, USA
Kamran Paynabar
Kamran Paynabar
Unknown affiliation
Mohsen Moghaddam
Mohsen Moghaddam
Georgia Institute of Technology
Human-Machine InteractionExtended RealityArtificial IntelligenceMachine Learning