🤖 AI Summary
Robot explanations in human-robot interaction often lack contextual adaptivity, undermining transparency and user trust. Method: This paper proposes a Petri net–based context-adaptive explanation generation framework—the first to integrate Petri nets into robotic explanation systems. It formally models dynamic contextual cues (e.g., user attention, co-presence) to precisely capture concurrency, causal dependencies, and state transitions; formal verification ensures deadlock-freedom, boundedness, liveness, and context-sensitive reachability. Results: Experiments demonstrate strong robustness and real-time responsiveness across diverse interaction scenarios. The framework safely and reliably generates natural-language explanations aligned with the current context, significantly enhancing explanation transparency and user trust.
📝 Abstract
In human-robot interaction, robots must communicate in a natural and transparent manner to foster trust, which requires adapting their communication to the context. In this paper, we propose using Petri nets (PNs) to model contextual information for adaptive robot explanations. PNs provide a formal, graphical method for representing concurrent actions, causal dependencies, and system states, making them suitable for analyzing dynamic interactions between humans and robots. We demonstrate this approach through a scenario involving a robot that provides explanations based on contextual cues such as user attention and presence. Model analysis confirms key properties, including deadlock-freeness, context-sensitive reachability, boundedness, and liveness, showing the robustness and flexibility of PNs for designing and verifying context-adaptive explanations in human-robot interactions.
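To make the modeling idea concrete, the core mechanism can be sketched in a few lines of Python. This is a minimal illustrative sketch, not the paper's actual model: places hold tokens encoding contextual cues (e.g., a hypothetical `user_present`, `user_attentive`), and a transition is enabled only when all of its input places are marked, so which explanation-producing transition can fire depends on the current context. All place and transition names below are invented for illustration.

```python
# Minimal Petri net sketch (illustrative only; not the paper's model).
# Places encode contextual cues; a transition fires only when every
# input place holds a token, mirroring context-sensitive enabling.
from dataclasses import dataclass


@dataclass
class PetriNet:
    marking: dict       # place name -> token count (current state)
    transitions: dict   # transition name -> (input places, output places)

    def enabled(self, t: str) -> bool:
        """A transition is enabled iff every input place has a token."""
        inputs, _ = self.transitions[t]
        return all(self.marking.get(p, 0) >= 1 for p in inputs)

    def fire(self, t: str) -> None:
        """Consume one token from each input, add one to each output."""
        if not self.enabled(t):
            raise ValueError(f"transition {t!r} is not enabled")
        inputs, outputs = self.transitions[t]
        for p in inputs:
            self.marking[p] -= 1
        for p in outputs:
            self.marking[p] = self.marking.get(p, 0) + 1


# Hypothetical scenario: a detailed explanation is appropriate only when
# the user is both present and attentive; otherwise it is deferred.
# (Presence/attention are re-produced on firing, modeling read arcs.)
net = PetriNet(
    marking={"user_present": 1, "user_attentive": 1, "explanation_pending": 1},
    transitions={
        "give_detailed_explanation": (
            ["user_present", "user_attentive", "explanation_pending"],
            ["user_present", "user_attentive", "explanation_given"],
        ),
        "defer_explanation": (
            ["explanation_pending"],
            ["explanation_deferred"],
        ),
    },
)

net.fire("give_detailed_explanation")
print(net.marking["explanation_given"])  # -> 1
```

Properties such as boundedness or deadlock-freeness would then be checked over the reachable markings of a net like this, e.g., by verifying that from every reachable marking at least one transition remains enabled.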