Evaluating and Improving Graph-based Explanation Methods for Multi-Agent Coordination

📅 2025-02-14
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing graph neural network (GNN) explanation methods lack sufficient interpretability for communication channels in multi-agent collaborative decision-making. Method: We systematically evaluate the limitations of current GNN explainers in identifying critical communication links and propose an attention entropy regularization mechanism to enhance the explainability-friendliness of Graph Attention Network (GAT)-based policies. This method imposes theoretically grounded information entropy constraints to improve discriminability between the explanation subgraph and its complement. Contribution/Results: Our approach maintains high task performance (success rate ≥98.5%) while significantly improving explanation quality—achieving an average 12.7% AUC gain across three tasks and three scalability settings. It is the first work to rigorously characterize the applicability boundaries of graph explanation methods in multi-agent settings, establishing a novel paradigm and practical toolkit for explainable multi-agent learning.

Technology Category

Application Category

📝 Abstract
Graph Neural Networks (GNNs), developed by the graph learning community, have been adopted and shown to be highly effective in multi-robot and multi-agent learning. Inspired by this successful cross-pollination, we investigate and characterize the suitability of existing GNN explanation methods for explaining multi-agent coordination. We find that these methods have the potential to identify the most-influential communication channels that impact the team's behavior. Informed by our initial analyses, we propose an attention entropy regularization term that renders GAT-based policies more amenable to existing graph-based explainers. Intuitively, minimizing attention entropy incentivizes agents to limit their attention to the most influential or impactful agents, thereby easing the challenge faced by the explainer. We theoretically ground this intuition by showing that minimizing attention entropy increases the disparity between the explainer-generated subgraph and its complement. Evaluations across three tasks and three team sizes i) provides insights into the effectiveness of existing explainers, and ii) demonstrates that our proposed regularization consistently improves explanation quality without sacrificing task performance.
Problem

Research questions and friction points this paper is trying to address.

Improving GNN explanation methods
Identifying influential communication channels
Enhancing multi-agent coordination explainability
Innovation

Methods, ideas, or system contributions that make the work stand out.

GNNs enhance multi-agent coordination
Attention entropy regularization improves explanations
Minimizing entropy focuses on influential agents
🔎 Similar Papers
Siva Kailas
Siva Kailas
Carnegie Mellon University
Multi-Agent LearningArtificial IntelligenceMulti-Robot SystemsComputer Vision
S
Shalin Jain
School of Interactive Computing, College of Computing, Georgia Institute of Technology, Atlanta, Georgia, United States of America
H
H. Ravichandar
School of Interactive Computing, College of Computing, Georgia Institute of Technology, Atlanta, Georgia, United States of America