🤖 AI Summary
This work addresses the challenge that current robotic systems struggle, during human-robot interaction, to generate natural explanations that are both logically coherent and aligned with human cognitive expectations. The authors propose a novel approach integrating ontological reasoning with large language models (LLMs): the ontology ensures semantic consistency and domain grounding, while the LLM contributes contextual awareness and fluent generation capabilities. This integration enables, for the first time, a synergy between static contrastive ontological narratives and dynamic language generation, allowing robots to assess event typicality based on experience and to produce clear, concise, and interactive explanations. The system further supports adaptive refinement driven by user feedback. Experimental results demonstrate significant improvements in explanation clarity and conciseness without compromising semantic accuracy, while also providing preliminary validation of the system's interactive adaptability.
📝 Abstract
Building effective human-robot interaction requires robots to derive conclusions from their experiences that are both logically sound and communicated in ways aligned with human expectations. This paper presents a hybrid framework that blends ontology-based reasoning with large language models (LLMs) to produce semantically grounded and natural robot explanations. Ontologies ensure logical consistency and domain grounding, while LLMs provide fluent, context-aware, and adaptive language generation. The proposed method grounds data from human-robot experiences, enabling robots to reason about whether events are typical or atypical based on their properties. We integrate a state-of-the-art algorithm for retrieving and constructing static contrastive ontology-based narratives with an LLM agent that uses them to produce concise, clear, and interactive explanations. The approach is validated through a laboratory study replicating an industrial collaborative task. Empirical results show significant improvements in the clarity and brevity of ontology-based narratives while preserving their semantic accuracy. Initial evaluations further demonstrate the system's ability to adapt explanations to user feedback. Overall, this work highlights the potential of ontology-LLM integration to advance explainable agency and promote more transparent human-robot collaboration.