Contextual Knowledge Sharing in Multi-Agent Reinforcement Learning with Decentralized Communication and Coordination

๐Ÿ“… 2025-01-26
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
This paper addresses decentralized multi-agent reinforcement learning (Dec-MARL) under goal heterogeneity and limited observability. To bridge the longstanding decoupling of communication and decision-making in Dec-MARL, the authors propose a dynamic knowledge-sharing framework that tightly integrates communication and coordination. Its core innovation is the joint embedding of goal-awareness and time-awareness into the knowledge-sharing mechanism. Specifically, the framework comprises: (i) an attention-driven contextual gating communication module; (ii) a goal–temporal joint encoder; (iii) a decentralized policy network; and (iv) temporal knowledge decay modeling. Extensive experiments in multi-task settings with dynamically appearing obstacles demonstrate a 23.7% improvement in average task completion rate and a 41% increase in knowledge utilization efficiency, significantly enhancing collaborative robustness and environmental adaptability.

๐Ÿ“ Abstract
Decentralized Multi-Agent Reinforcement Learning (Dec-MARL) has emerged as a pivotal approach for addressing complex tasks in dynamic environments. Existing Multi-Agent Reinforcement Learning (MARL) methodologies typically assume a shared objective among agents and rely on centralized control. However, many real-world scenarios feature agents with individual goals and limited observability of other agents, complicating coordination and hindering adaptability. Existing Dec-MARL strategies prioritize either communication or coordination, lacking an integrated approach that leverages both. This paper presents a novel Dec-MARL framework that integrates peer-to-peer communication and coordination, incorporating goal-awareness and time-awareness into the agents' knowledge-sharing processes. Our framework equips agents with the ability to (i) share contextually relevant knowledge to assist other agents, and (ii) reason based on information acquired from multiple agents, while considering their own goals and the temporal context of prior knowledge. We evaluate our approach through several complex multi-agent tasks in environments with dynamically appearing obstacles. Our work demonstrates that incorporating goal-aware and time-aware knowledge sharing significantly enhances overall performance.
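The goal-aware and time-aware knowledge sharing described in the abstract can be illustrated with a minimal sketch: each agent weights incoming peer knowledge by its relevance to the agent's own goal and by an exponential decay over the message's age. All names, the cosine-similarity relevance score, and the exponential decay are illustrative assumptions; the paper's actual goal–temporal encoder is a learned module, not this hand-coded rule.

```python
import numpy as np

def fuse_shared_knowledge(own_goal, messages, current_t, decay_rate=0.5):
    """Hypothetical sketch of goal-aware, time-aware knowledge fusion.

    Each element of `messages` is (knowledge_vector, sender_goal_embedding,
    timestamp). A message's weight combines:
      (i)  goal relevance: cosine similarity to the agent's own goal, and
      (ii) temporal freshness: exponential decay with message age.
    Messages with negative goal relevance are ignored (clamped to 0).
    """
    weights, vecs = [], []
    for vec, goal, t in messages:
        rel = np.dot(own_goal, goal) / (
            np.linalg.norm(own_goal) * np.linalg.norm(goal) + 1e-8
        )
        freshness = np.exp(-decay_rate * (current_t - t))
        weights.append(max(rel, 0.0) * freshness)
        vecs.append(np.asarray(vec, dtype=float))
    w = np.asarray(weights)
    if w.sum() < 1e-8:          # nothing relevant or fresh enough
        return np.zeros_like(vecs[0])
    return np.average(np.stack(vecs), axis=0, weights=w / w.sum())
```

In this toy rule, a stale message or one from a peer pursuing an opposing goal contributes little or nothing to the fused knowledge vector, mirroring the paper's premise that agents should reason over shared information "while considering their own goals and the temporal context of prior knowledge."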
Problem

Research questions and friction points this paper is trying to address.

Decentralized Multi-Agent Reinforcement Learning
Information Sharing
Collaboration in Dynamic Environments
Innovation

Methods, ideas, or system contributions that make the work stand out.

Decentralized Multi-Agent Reinforcement Learning
Dynamic Environment Adaptability
Enhanced Communication and Collaboration
๐Ÿ”Ž Similar Papers
No similar papers found.
Hung Du
Applied Artificial Intelligence Institute - Deakin University
Deep Reinforcement Learning · Multi-agent Systems · Context-aware Systems · Translational Research
Srikanth Thudumu
Applied Artificial Intelligence Institute (A2I2), Deakin University, Geelong VIC 3216, Australia
Hy Nguyen
Applied Artificial Intelligence Institute (A2I2), Deakin University, Geelong VIC 3216, Australia
Rajesh Vasa
Head of Translational Research, Applied Artificial Intelligence Institute, Deakin University
Artificial Intelligence · Software Evolution · Automated Software Engineering · Tools
K. Mouzakis
Applied Artificial Intelligence Institute (A2I2), Deakin University, Geelong VIC 3216, Australia