Explainable OOHRI: Communicating Robot Capabilities and Limitations as Augmented Reality Affordances

📅 2026-01-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge that robots rarely convey their operational capabilities and limitations transparently during human-robot interaction, which often leads to inefficient user commands or collaboration breakdowns. The paper presents the first HRI system to integrate augmented reality (AR) with an object-oriented explainability framework, dynamically visualizing a robot's action possibilities and constraints on physical objects within a spatially aligned virtual environment. Using visual markers, radial menus, color coding, and explanatory labels, the system renders interpretable affordances in real time. It incorporates a vision-language model to construct semantic object representations and lets users interact directly with virtual twins through mixed-initiative control. User studies show that this approach significantly improves participants' understanding of robot capabilities, enabling them to issue accurate object-oriented commands and to co-repair task failures effectively.
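
A minimal sketch of what such an object-oriented representation could look like: each object detected by the vision-language model carries semantic properties, per-action feasibility flags (which would drive the AR color coding), and the explanation text shown as a tag. The `SceneObject` and `Affordance` classes and all field names here are hypothetical illustrations, not the paper's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class Affordance:
    action: str       # e.g. "pick", "place", "pour"
    feasible: bool    # could drive the AR color coding (e.g. green/red)
    explanation: str  # text rendered as an explanation tag

@dataclass
class SceneObject:
    name: str                          # semantic label from the VLM
    pose: tuple[float, float, float]   # position in the spatially aligned frame
    properties: dict[str, str] = field(default_factory=dict)
    affordances: list[Affordance] = field(default_factory=list)

    def explain(self, action: str) -> str:
        """Return the explanation for one action, for on-the-fly display."""
        for a in self.affordances:
            if a.action == action:
                return a.explanation
        return f"{self.name}: no information about '{action}'"

# Example: a mug whose "pour" affordance is infeasible, with the reason
# the interface could render next to the virtual twin.
mug = SceneObject(
    name="mug",
    pose=(0.42, -0.10, 0.05),
    properties={"material": "ceramic", "state": "full"},
    affordances=[
        Affordance("pick", True, "Within reach and graspable."),
        Affordance("pour", False, "Gripper cannot tilt a full mug safely."),
    ],
)
print(mug.explain("pour"))
```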

📝 Abstract
Human interaction is essential for issuing personalized instructions and assisting robots when failure is likely. However, robots remain largely black boxes, offering users little insight into their evolving capabilities and limitations. To address this gap, we present explainable object-oriented HRI (X-OOHRI), an augmented reality (AR) interface that conveys robot action possibilities and constraints through visual signifiers, radial menus, color coding, and explanation tags. Our system encodes object properties and robot limits into object-oriented structures using a vision-language model, allowing explanation generation on the fly and direct manipulation of virtual twins spatially aligned within a simulated environment. We integrate the end-to-end pipeline with a physical robot and showcase diverse use cases ranging from low-level pick-and-place to high-level instructions. Finally, we evaluate X-OOHRI through a user study and find that participants effectively issue object-oriented commands, develop accurate mental models of robot limitations, and engage in mixed-initiative resolution.
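
To make the "encoded limits plus on-the-fly explanations" idea concrete, below is a hedged sketch of how a pick command might be validated against robot constraints before execution, returning an explanation the AR interface could surface for mixed-initiative repair. The function `check_pick` and the values `REACH_M` and `PAYLOAD_KG` are assumed example names and numbers, not taken from the paper.

```python
import math

REACH_M = 0.85      # assumed maximum reach of the arm (example value)
PAYLOAD_KG = 1.0    # assumed maximum payload (example value)

def check_pick(obj_pose: tuple[float, float, float],
               obj_mass_kg: float) -> tuple[bool, str]:
    """Return (feasible, explanation) for a pick command on one object."""
    # Distance from the robot base (assumed at the origin) to the object.
    dist = math.dist((0.0, 0.0, 0.0), obj_pose)
    if dist > REACH_M:
        return False, f"Object is {dist:.2f} m away, beyond the {REACH_M} m reach."
    if obj_mass_kg > PAYLOAD_KG:
        return False, f"Object weighs {obj_mass_kg} kg, over the {PAYLOAD_KG} kg payload."
    return True, "Pick is feasible."

# A failed check yields an explanation the user can act on (co-repair),
# e.g. by moving the object closer before reissuing the command.
ok, why = check_pick((0.90, 0.20, 0.10), 0.4)
if not ok:
    print(f"Robot: {why}")
```
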
Problem

Research questions and friction points this paper is trying to address.

explainable HRI
robot capabilities
robot limitations
augmented reality
human-robot interaction
Innovation

Methods, ideas, or system contributions that make the work stand out.

Explainable HRI
Augmented Reality
Object-Oriented Interaction
Vision-Language Model
Virtual Twin