Advances in Multi-agent Reinforcement Learning: Persistent Autonomy and Robot Learning Lab Report 2024

📅 2024-12-30
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the challenge of collaborative exploration and task execution in non-stationary, partially observable environments under agent constraints (e.g., low battery, limited maneuverability), this paper proposes a hierarchical credit assignment mechanism and a role-aware policy transfer framework to enhance robustness and scalability of multi-agent reinforcement learning (MARL) under resource limitations and agent heterogeneity. Methodologically, we integrate imitation learning (IL) with the centralized training with decentralized execution (CTDE) paradigm, employ graph neural networks (GNNs) to model dynamic communication topologies, and incorporate curriculum learning alongside safety-constrained reinforcement learning. Evaluated on robotic cooperative navigation and battery-sensitive scheduling simulations, our approach achieves a 32% improvement in task completion rate, accelerates policy convergence by 2.1×, and reduces communication overhead by 47%, effectively alleviating convergence bottlenecks induced by environmental non-stationarity and the curse of dimensionality.

Technology Category

Application Category

📝 Abstract
Multi-Agent Reinforcement Learning (MARL) approaches have emerged as popular solutions to address the general challenges of cooperation in multi-agent environments, where the success of achieving shared or individual goals critically depends on the coordination and collaboration between agents. However, existing cooperative MARL methods face several challenges intrinsic to multi-agent systems, such as the curse of dimensionality, non-stationarity, and the need for a global exploration strategy. Moreover, the presence of agents with constraints (e.g., limited battery life, restricted mobility) or distinct roles further exacerbates these challenges. This document provides an overview of recent advances in Multi-Agent Reinforcement Learning (MARL) conducted at the Persistent Autonomy and Robot Learning (PeARL) lab at the University of Massachusetts Lowell. We briefly discuss various research directions and present a selection of approaches proposed in our most recent publications. For each proposed approach, we also highlight potential future directions to further advance the field.
Problem

Research questions and friction points this paper is trying to address.

Multi-Agent Reinforcement Learning
Robot Team Collaboration
Complex Unstable Environments
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-Agent Reinforcement Learning
Collaborative Abilities
Complex Unstable Environments
🔎 Similar Papers
No similar papers found.