MIXRTs: Toward Interpretable Multi-Agent Reinforcement Learning via Mixing Recurrent Soft Decision Trees

📅 2022-09-15
🏛️ IEEE Transactions on Pattern Analysis and Machine Intelligence
📈 Citations: 4
Influential: 0
🤖 AI Summary
Existing multi-agent reinforcement learning (MARL) methods rely on opaque black-box neural networks, yielding uninterpretable decision-making processes; meanwhile, mainstream explainability techniques suffer from limited expressiveness and suboptimal performance. To address this, we propose Mixed Recurrent Soft Decision Trees (MIXRTs), a novel architecture that unifies high performance with strong interpretability within the value decomposition framework. MIXRTs introduce the first recurrent soft decision tree, enabling differentiable, feature-level modeling of decision paths. By linearly mixing local action-value estimates, MIXRTs theoretically guarantee additivity and monotonicity—explicitly revealing individual agent contributions and cooperative mechanisms. Empirically, MIXRTs match state-of-the-art black-box MARL methods on challenging benchmarks including Spread and StarCraft II, while providing end-to-end interpretable decision paths, quantitative feature attribution, and transparent credit assignment.
📝 Abstract
While achieving tremendous success in various fields, existing multi-agent reinforcement learning (MARL) with a black-box neural network makes decisions in an opaque manner that hinders humans from understanding the learned knowledge and how input observations influence decisions. In contrast, existing interpretable approaches usually suffer from weak expressivity and low performance. To bridge this gap, we propose MIXing Recurrent soft decision Trees (MIXRTs), a novel interpretable architecture that can represent explicit decision processes via the root-to-leaf path and reflect each agent's contribution to the team. Specifically, we construct a novel soft decision tree using a recurrent structure and demonstrate which features influence the decision-making process. Then, based on the value decomposition framework, we linearly assign credit to each agent by explicitly mixing individual action values to estimate the joint action value using only local observations, providing new insights into interpreting the cooperation mechanism. Theoretical analysis confirms that MIXRTs guarantee additivity and monotonicity in the factorization of joint action values. Evaluations on complex tasks like Spread and StarCraft II demonstrate that MIXRTs compete with existing methods while providing clear explanations, paving the way for interpretable and high-performing MARL systems.
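The abstract's core building block is a soft decision tree: each inner node routes an observation probabilistically (via a sigmoid over a learned linear test) rather than with a hard threshold, so the root-to-leaf path is differentiable and the routing weights show which features drive the decision. A minimal sketch of one such soft routing step, using NumPy and invented toy values (the weights, features, and leaf values here are illustrative assumptions, not the paper's learned parameters, and the recurrent component is omitted):

```python
import numpy as np

def soft_node(x, w, b):
    """Soft routing at an inner node: sigmoid(w.x + b) is the
    probability of taking the right branch (differentiable in w, b)."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

# depth-1 soft tree over a 2-feature observation (toy values)
x = np.array([0.5, -1.0])          # hypothetical local observation
w = np.array([1.0, 2.0])           # hypothetical routing weights
p_right = soft_node(x, w, 0.0)     # soft branch probability in (0, 1)

# each leaf holds an action-value estimate; the output is the
# probability-weighted mixture of the leaves along the soft path
leaf_left, leaf_right = np.array([1.0]), np.array([3.0])
q = (1 - p_right) * leaf_left + p_right * leaf_right
```

Because every node's output is a smooth function of the input features, the whole tree trains end to end with gradient descent, while the magnitudes of `w` remain readable as per-feature influence on the decision path.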
Problem

Research questions and friction points this paper is trying to address.

Enhance interpretability in multi-agent reinforcement learning systems.
Improve expressivity and performance of interpretable MARL approaches.
Provide clear explanations for decision-making and cooperation mechanisms.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Recurrent soft decision trees for interpretability
Value decomposition for agent credit assignment
Local observations for joint action value estimation
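The credit-assignment idea above can be sketched concretely: if the joint action value is a linear combination of per-agent values with non-negative weights, then additivity and monotonicity (each agent's improvement never decreases the team value) hold by construction. A toy illustration under that assumption, with invented numbers (this is not the paper's mixing network, which conditions on observations):

```python
import numpy as np

def mix_joint_q(agent_qs, raw_w, b):
    """Linearly mix per-agent action values into a joint value.
    Taking abs() keeps every mixing weight non-negative, so
    dQ_tot/dQ_i >= 0 -- the monotonicity the factorization needs."""
    w = np.abs(raw_w)
    return float(agent_qs @ w + b)

# hypothetical per-agent action values and raw (possibly negative) weights
qs = np.array([1.0, 2.0, 0.5])
raw_w = np.array([0.5, -0.3, 1.0])
q_tot = mix_joint_q(qs, raw_w, 0.1)   # = 0.5 + 0.6 + 0.5 + 0.1 = 1.7
```

The non-negative weights also double as an explanation: each weight quantifies how much the corresponding agent's local action value contributes to the team's joint value.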
Zichuan Liu
Department of Control Science and Intelligent Engineering, School of Management and Engineering, Nanjing University, Nanjing 210093, China
Yuanyang Zhu
Nanjing University
Reinforcement learning · Interpretability · Machine learning · AI4Science
Zhi Wang
Department of Control Science and Intelligent Engineering, School of Management and Engineering, Nanjing University, Nanjing 210093, China
Chunlin Chen
Nanjing University
Reinforcement Learning · Quantum Control · Mobile Robotics