🤖 AI Summary
This work proposes Successive Sub-value Q-learning (S2Q), a novel multi-agent reinforcement learning (MARL) approach that addresses a limitation of existing methods: they rely solely on a single optimal action and often converge to suboptimal policies when the value function evolves during training. S2Q is the first MARL algorithm to explicitly model multiple sub-optimal value functions, preserving high-value alternative actions and integrating a Softmax behavior policy to sustain effective exploration. Built on a value decomposition framework, S2Q tracks dynamically shifting optimal policies more reliably while improving exploration efficiency. Empirical evaluations demonstrate that S2Q consistently outperforms state-of-the-art algorithms across several challenging MARL benchmarks, exhibiting superior adaptability and overall performance.
📝 Abstract
Value decomposition is a core approach for cooperative multi-agent reinforcement learning (MARL). However, existing methods still rely on a single optimal action and struggle to adapt when the underlying value function shifts during training, often converging to suboptimal policies. To address this limitation, we propose Successive Sub-value Q-learning (S2Q), which learns multiple sub-value functions to retain alternative high-value actions. Incorporating these sub-value functions into a Softmax-based behavior policy, S2Q encourages persistent exploration and enables $Q^{\text{tot}}$ to adjust quickly to the changing optima. Experiments on challenging MARL benchmarks confirm that S2Q consistently outperforms various MARL algorithms, demonstrating improved adaptability and overall performance. Our code is available at https://github.com/hyeon1996/S2Q.