Effective Explanations for Belief-Desire-Intention Robots: When and What to Explain

📅 2025-07-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
BDI robots performing everyday kitchen cleaning tasks often exhibit anomalous behaviors that confuse users, undermining transparency and trust. Method: This paper proposes a context-aware explanation generation and triggering mechanism integrated into the BDI architecture. It introduces two algorithms embeddable within the BDI reasoning cycle: (i) a dynamic anomaly detection algorithm that identifies explanation-worthy behaviors by jointly modeling user preferences and the agent’s beliefs, desires, and intentions; and (ii) an explanation generation algorithm producing concise, intention-grounded, and context-sensitive explanations—explicitly referencing environmental states and task goals. Contribution/Results: Experiments demonstrate that users significantly prefer these short, contextualized explanations when encountering unexpected robot behavior, leading to substantial improvements in comprehension and trust. To our knowledge, this is the first work to tightly couple explanation triggering and generation with the core BDI decision-making loop, thereby co-optimizing explainability and autonomous reasoning.

📝 Abstract
When robots perform complex and context-dependent tasks in our daily lives, deviations from expectations can confuse users. Explanations of the robot's reasoning process can help users understand the robot's intentions. However, when to provide explanations and what they should contain are important considerations to avoid annoying users. We have investigated user preferences for explanation demand and content for a robot that helps with daily cleaning tasks in a kitchen. Our results show that users want explanations in surprising situations and prefer concise explanations that clearly state the intention behind the confusing action and the contextual factors that were relevant to this decision. Based on these findings, we propose two algorithms to identify surprising actions and to construct effective explanations for Belief-Desire-Intention (BDI) robots. Our algorithms can be easily integrated into the BDI reasoning process and pave the way for better human-robot interaction with context- and user-specific explanations.
Problem

Research questions and friction points this paper is trying to address.

When to explain robot actions to avoid user confusion
What content to include in robot explanations for clarity
How to generate effective explanations for BDI robots
Innovation

Methods, ideas, or system contributions that make the work stand out.

Identify surprising actions algorithmically
Construct concise intention-based explanations
Integrate explanations in BDI reasoning
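The two contributions above slot into the BDI reasoning cycle: a trigger that flags actions deviating from the user's expectations, and a generator that grounds the explanation in the active intention and the beliefs that motivated it. A minimal sketch of this coupling is shown below; all class and method names are hypothetical illustrations, not the paper's actual algorithms.

```python
from dataclasses import dataclass, field

@dataclass
class BDIAgent:
    # Illustrative BDI state; real BDI architectures (e.g. Jason, JACK)
    # represent these far more richly.
    beliefs: dict = field(default_factory=dict)          # world state, e.g. {"the cup": "still dirty"}
    desires: list = field(default_factory=list)          # goals, e.g. ["kitchen_clean"]
    intentions: list = field(default_factory=list)       # committed plans
    expected_actions: set = field(default_factory=set)   # simple model of user expectations

    def is_surprising(self, action: str) -> bool:
        # Explanation trigger: an action is explanation-worthy if it
        # falls outside the modeled user expectations.
        return action not in self.expected_actions

    def explain(self, action: str) -> str:
        # Explanation generation: concise, intention-grounded, and
        # context-sensitive (references the active goal and beliefs).
        goal = self.desires[0] if self.desires else "no active goal"
        context = ", ".join(f"{k} is {v}" for k, v in self.beliefs.items())
        return f"I chose '{action}' because I want to achieve '{goal}' and {context}."

    def step(self, action: str):
        # One slice of the reasoning cycle: commit to the action,
        # and emit an explanation only when the action would surprise the user.
        self.intentions.append(action)
        if self.is_surprising(action):
            return self.explain(action)
        return None


agent = BDIAgent(
    beliefs={"the cup": "still dirty"},
    desires=["kitchen_clean"],
    expected_actions={"wipe_counter"},
)
print(agent.step("rewash_cup"))
```

This mirrors the paper's key design point: explanations are triggered selectively (only for surprising actions) rather than for every step, and their content combines the intention with the contextual beliefs behind it.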
Cong Wang
LASR Lab, TU Dresden, Dresden, Germany
Roberto Calandra
LASR Lab, TU Dresden, Dresden, Germany
Verena Klös
Carl von Ossietzky Universität Oldenburg
explainable CPS · Adaptive Systems · Intelligent Systems · Formal Methods · Verification