Ontological grounding for sound and natural robot explanations via large language models

📅 2026-02-14
🤖 AI Summary
This work addresses a core challenge in human-robot interaction: current robotic systems struggle to generate natural explanations that are both logically coherent and aligned with human cognitive expectations. The authors propose a novel approach integrating ontological reasoning with large language models (LLMs): the ontology ensures semantic consistency and domain grounding, while the LLM contributes contextual awareness and fluent generation. The integration pairs static contrastive ontological narratives with dynamic language generation, allowing robots to assess event typicality based on experience and produce clear, concise, and interactive explanations. The system further supports user-feedback-driven adaptive refinement. Experimental results demonstrate significant improvements in explanation clarity and conciseness without compromising semantic accuracy, and provide preliminary validation of the system's interactive adaptability.

📝 Abstract
Building effective human-robot interaction requires robots to derive conclusions from their experiences that are both logically sound and communicated in ways aligned with human expectations. This paper presents a hybrid framework that blends ontology-based reasoning with large language models (LLMs) to produce semantically grounded and natural robot explanations. Ontologies ensure logical consistency and domain grounding, while LLMs provide fluent, context-aware and adaptive language generation. The proposed method grounds data from human-robot experiences, enabling robots to reason about whether events are typical or atypical based on their properties. We integrate a state-of-the-art algorithm for retrieving and constructing static contrastive ontology-based narratives with an LLM agent that uses them to produce concise, clear, interactive explanations. The approach is validated through a laboratory study replicating an industrial collaborative task. Empirical results show significant improvements in the clarity and brevity of ontology-based narratives while preserving their semantic accuracy. Initial evaluations further demonstrate the system's ability to adapt explanations to user feedback. Overall, this work highlights the potential of ontology-LLM integration to advance explainable agency, and promote more transparent human-robot collaboration.
Problem

Research questions and friction points this paper is trying to address.

human-robot interaction
explainable agency
ontology-based reasoning
natural language generation
robot explanations
Innovation

Methods, ideas, or system contributions that make the work stand out.

ontology-based reasoning
large language models
explainable agency
human-robot interaction
contrastive narratives
Alberto Olivares-Alarcos
Postdoctoral scientist, Institut de Robòtica i Informàtica Industrial, CSIC-UPC
Applied Ontology & Robotics · Explainable Robots · Collaborative Robots
Muhammad Ahsan
Institute of Electrical and Control Engineering, National Yang Ming Chiao Tung University, Hsinchu, Taiwan
Satrio Sanjaya
Institute of Electrical and Control Engineering, National Yang Ming Chiao Tung University, Hsinchu, Taiwan
Hsien-I Lin
Institute of Electrical and Control Engineering, National Yang Ming Chiao Tung University, Hsinchu, Taiwan
Guillem Alenyà
Institut de Robòtica i Informàtica Industrial, CSIC-UPC, Barcelona, Spain