Ensemble Value Functions for Efficient Exploration in Multi-Agent Reinforcement Learning

📅 2023-02-07
🏛️ arXiv.org
📈 Citations: 5
Influential: 0
🤖 AI Summary
To address the dual challenges of inefficient exploration in the joint action space and unstable policy training in multi-agent reinforcement learning (MARL), this paper proposes EMAX, a unified framework for extending value-based MARL algorithms (instantiated on IDQN, VDN, and QMIX). EMAX equips each agent with an ensemble of value functions and integrates three key mechanisms: (i) upper-confidence-bound (UCB)-guided exploration for systematic coverage of the joint action space; (ii) averaging of ensemble target values to reduce gradient variance and stabilise optimisation; and (iii) majority voting across the ensemble during evaluation to make action selection more robust to miscoordination. This is the first work to jointly incorporate all three ensemble techniques within one value-based MARL framework. Experiments demonstrate that EMAX improves final evaluation returns by 185% across 11 sparse-reward general-sum tasks, and improves sample efficiency and final returns over vanilla IDQN, VDN, and QMIX by 60%, 47%, and 538%, respectively, across 21 common-reward tasks.
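The UCB-guided exploration can be pictured as scoring each action by the ensemble mean plus an uncertainty bonus. Below is a minimal sketch of one common instantiation, assuming the bonus is a scaled standard deviation across ensemble members; `ucb_action` and `beta` are illustrative names, not the paper's API.

```python
import numpy as np

def ucb_action(ensemble_q_values: np.ndarray, beta: float = 1.0) -> int:
    """Select an action via a UCB rule over an ensemble of value estimates.

    ensemble_q_values: array of shape (K, A) holding Q(s, a) for each of
    K ensemble members over A actions (layout is illustrative).
    """
    mean_q = ensemble_q_values.mean(axis=0)   # exploitation term
    std_q = ensemble_q_values.std(axis=0)     # epistemic-uncertainty bonus
    return int(np.argmax(mean_q + beta * std_q))  # optimism under uncertainty
```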
📝 Abstract
Multi-agent reinforcement learning (MARL) requires agents to explore within a vast joint action space to find joint actions that lead to coordination. Existing value-based MARL algorithms commonly rely on random exploration, such as $\epsilon$-greedy, which is neither systematic nor efficient at identifying effective actions in multi-agent problems. Additionally, the concurrent training of the policies of multiple agents can render the optimisation non-stationary. This can lead to unstable value estimates and high-variance gradients, and ultimately hinder coordination between agents. To address these challenges, we propose ensemble value functions for multi-agent exploration (EMAX). EMAX is a framework to seamlessly extend value-based MARL algorithms. EMAX leverages an ensemble of value functions for each agent to guide their exploration, reduce the variance of their optimisation, and make their policies more robust to miscoordination. EMAX achieves these benefits by (1) systematically guiding the exploration of agents with a UCB policy towards parts of the environment that require multiple agents to coordinate. (2) EMAX computes average value estimates across the ensemble as target values to reduce the variance of gradients and make optimisation more stable. (3) During evaluation, EMAX selects actions following a majority vote across the ensemble to reduce the likelihood of miscoordination. We first instantiate independent DQN with EMAX and evaluate it in 11 general-sum tasks with sparse rewards. We show that EMAX improves final evaluation returns by 185% across all tasks. We then evaluate EMAX on top of IDQN, VDN and QMIX in 21 common-reward tasks, and show that EMAX improves sample efficiency and final evaluation returns across all tasks over all three vanilla algorithms by 60%, 47%, and 538%, respectively.
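Mechanism (2), averaging value estimates across the ensemble to form target values, can be sketched as follows. This assumes a DQN-style TD target with greedy bootstrapping on the ensemble mean; the abstract does not specify whether the max is taken before or after averaging, so treat this as one plausible reading rather than the paper's exact rule.

```python
import numpy as np

def td_target(target_ensemble_q_next: np.ndarray, reward: float,
              done: bool, gamma: float = 0.99) -> float:
    """TD target using the ensemble mean as the bootstrap value.

    target_ensemble_q_next: array of shape (K, A) with target-network
    Q-values for the next state from each of K ensemble members
    (layout is illustrative).
    """
    mean_q_next = target_ensemble_q_next.mean(axis=0)  # average across ensemble
    bootstrap = 0.0 if done else mean_q_next.max()     # greedy value of next state
    return float(reward + gamma * bootstrap)
```

Averaging before bootstrapping smooths out the idiosyncratic errors of individual ensemble members, which is what reduces the variance of the resulting gradients.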
Problem

Research questions and friction points this paper is trying to address.

Unsystematic, inefficient exploration of the vast joint action space
Unstable value estimates and high-variance gradients caused by non-stationary concurrent training
Miscoordination between the learned policies of multiple agents
Innovation

Methods, ideas, or system contributions that make the work stand out.

Ensemble of value functions per agent to guide exploration
UCB policy directs exploration towards states that require coordination
Ensemble-averaged target values reduce gradient variance and stabilise training
Majority vote across the ensemble reduces miscoordination at evaluation (see the sketch after this list)
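A minimal sketch of the evaluation-time majority vote: each ensemble member votes for its greedy action, and the most-voted action is executed. Function and variable names are illustrative, not taken from the paper.

```python
import numpy as np

def majority_vote_action(ensemble_q_values: np.ndarray) -> int:
    """Evaluation-time action selection by majority vote.

    ensemble_q_values: array of shape (K, A), one row of Q-values per
    ensemble member (layout is illustrative). Ties are broken by the
    lowest action index, following argmax order.
    """
    votes = ensemble_q_values.argmax(axis=1)  # each member's greedy action
    counts = np.bincount(votes, minlength=ensemble_q_values.shape[1])
    return int(counts.argmax())               # most frequently chosen action
```

Because changing the outcome requires several ensemble members to independently prefer a deviating action, a single noisy member is unlikely to cause miscoordination.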