Experiential Explanations for Reinforcement Learning

📅 2022-10-10
🏛️ Neural computing & applications (Print)
📈 Citations: 1
Influential: 1
🤖 AI Summary
Reinforcement learning (RL) policies suffer from poor interpretability: actions are chosen sequentially for the sake of future rewards, and qualitative information about where rewards come from is discarded during training, which hinders non-expert understanding and intervention. To address this, we propose "experiential explanation," a paradigm that trains influence predictors alongside the policy network to model how different sources of reward affect the agent in different states, yielding counterfactual, human-understandable explanations. By recovering reward-source attribution that standard RL training discards, the approach enables users to predict and intervene in agent behavior. In two human-subject evaluations, the method significantly improved participants' accuracy in predicting agent actions and outperformed baseline explanation types across five dimensions: understandability, completeness, satisfaction, practicality, and explanatory accuracy.
📝 Abstract
Reinforcement learning (RL) systems can be complex and non-interpretable, making it challenging for non-AI experts to understand or intervene in their decisions. This is due in part to the sequential nature of RL, in which actions are chosen because of their likelihood of obtaining future rewards. However, RL agents discard the qualitative features of their training, making it difficult to recover user-understandable information for "why" an action is chosen. We propose a technique, Experiential Explanations, to generate counterfactual explanations by training influence predictors along with the RL policy. Influence predictors are models that learn how different sources of reward affect the agent in different states, thus restoring information about how the policy reflects the environment. Two human evaluation studies revealed that participants presented with Experiential Explanations were better able to correctly guess what an agent would do than those presented with other standard types of explanation. Participants also found Experiential Explanations more understandable, satisfying, complete, useful, and accurate. Qualitative analysis identifies the factors of Experiential Explanations that are most useful and the characteristics that participants seek from explanations.
Problem

Research questions and friction points this paper is trying to address.

RL systems lack interpretability for non-experts
Agents discard the qualitative training information that explanations need
Need understandable counterfactual RL action explanations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Training influence predictors with RL policy
Generating counterfactual explanations for actions
Enhancing understandability via qualitative reward analysis
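The core idea above (a second set of value estimators trained with the same updates as the policy, but each seeing only one reward source's signal) can be sketched in a toy tabular setting. Everything here is illustrative, not from the paper: the paper trains learned influence predictor models alongside a policy network, whereas this sketch uses a 1-D corridor, hypothetical source names "goal" and "lava", tabular Q-learning, and made-up hyperparameters.

```python
import random

# Illustrative 1-D corridor: "goal" (+1) at the right end, "lava" (-1) at
# the left end, agent starts in the middle. All names are assumptions.
N = 7
GOAL, LAVA = N - 1, 0
ACTIONS = [-1, +1]            # move left, move right
GAMMA, ALPHA, EPS = 0.9, 0.5, 0.1

Q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}
# One influence table per reward source: the discounted reward that this
# source alone is expected to contribute from each state-action pair.
influence = {src: {(s, a): 0.0 for s in range(N) for a in ACTIONS}
             for src in ("goal", "lava")}

def step(s, a):
    s2 = max(0, min(N - 1, s + a))
    rewards = {"goal": 1.0 if s2 == GOAL else 0.0,
               "lava": -1.0 if s2 == LAVA else 0.0}
    return s2, rewards, s2 in (GOAL, LAVA)

def greedy(s):
    return max(ACTIONS, key=lambda a: Q[(s, a)])

for _ in range(2000):
    s, done = N // 2, False
    while not done:
        a = random.choice(ACTIONS) if random.random() < EPS else greedy(s)
        s2, rewards, done = step(s, a)
        target = sum(rewards.values()) + (0.0 if done
                                          else GAMMA * Q[(s2, greedy(s2))])
        Q[(s, a)] += ALPHA * (target - Q[(s, a)])
        # Same TD rule, but each table sees only its own source's reward,
        # so per-source attribution is preserved instead of discarded.
        for src in influence:
            t = rewards[src] + (0.0 if done
                                else GAMMA * influence[src][(s2, greedy(s2))])
            influence[src][(s, a)] += ALPHA * (t - influence[src][(s, a)])
        s = s2

# Counterfactual-style explanation in the state next to the lava: compare
# the negative source's influence under the taken vs. the alternative action.
s = 1
taken = greedy(s)
alt = -taken
print(f"state {s}: taken action {taken:+d}; lava influence "
      f"taken={influence['lava'][(s, taken)]:.2f}, "
      f"counterfactual={influence['lava'][(s, alt)]:.2f}")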
Amal Alabdulkarim
Computer Science PhD Student at Georgia Institute of Technology
Mark O. Riedl
School of Interactive Computing, Georgia Institute of Technology, Atlanta, Georgia, United States.