Learning to Coordinate via Quantum Entanglement in Multi-Agent Reinforcement Learning

📅 2026-02-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of coordination in multi-agent reinforcement learning under communication constraints by proposing the first coordination framework based on shared quantum entanglement rather than inter-agent communication. By introducing a differentiable quantum measurement mechanism and a “quantum coordinator–local executor” policy architecture, the approach goes beyond the limits of classical shared randomness and enables efficient decentralized decision-making. The method learns coordination strategies that exhibit quantum advantage in both one-shot black-box games and decentralized partially observable Markov decision processes (Dec-POMDPs), achieving significantly improved collaborative performance without any inter-agent communication.
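The "learns purely from experience" idea can be illustrated on the simplest game where entanglement helps, the CHSH game: treat the game's expected payoff as a black box and run gradient ascent over the players' measurement angles. This is a hedged sketch of the general idea only, not the paper's actual parameterization; all function and variable names here are illustrative:

```python
import numpy as np

# Illustrative sketch (not the paper's code): learn measurement angles for the
# CHSH game by gradient ascent, querying the game only as a black box that
# returns an expected win probability.

def expected_win(params):
    """Expected CHSH win probability for measurements on a shared Bell state.
    params = [alice_angle_x0, alice_angle_x1, bob_angle_y0, bob_angle_y1].
    For inputs (x, y) != (1, 1) the players win when outcomes agree,
    with probability cos^2(a - b); for (1, 1) when they differ, sin^2(a - b)."""
    a0, a1, b0, b1 = params
    p = (np.cos(a0 - b0) ** 2 + np.cos(a0 - b1) ** 2
         + np.cos(a1 - b0) ** 2 + np.sin(a1 - b1) ** 2)
    return p / 4

def finite_diff_grad(f, params, eps=1e-5):
    """Black-box gradient estimate via central finite differences."""
    g = np.zeros_like(params)
    for i in range(len(params)):
        d = np.zeros_like(params)
        d[i] = eps
        g[i] = (f(params + d) - f(params - d)) / (2 * eps)
    return g

rng = np.random.default_rng(0)
params = rng.uniform(-0.1, 0.1, size=4)  # near-zero random initial angles
for _ in range(3000):
    params += 0.3 * finite_diff_grad(expected_win, params)

print(f"learned win probability: {expected_win(params):.4f}")
# should approach cos^2(pi/8) ~ 0.8536, above the 0.75 shared-randomness bound
```

Starting near all-zero angles gives the classical value 0.75 (a saddle point); gradient ascent escapes it and converges to the quantum-optimal angles. The paper's actual machinery is more general (differentiable quantum measurements inside a policy network), but the learning signal is analogous.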

📝 Abstract
The inability to communicate poses a major challenge to coordination in multi-agent reinforcement learning (MARL). Prior work has explored correlating local policies via shared randomness, sometimes in the form of a correlation device, as a mechanism to assist in decentralized decision-making. In contrast, this work introduces the first framework for training MARL agents to exploit shared quantum entanglement as a coordination resource, which permits a larger class of communication-free correlated policies than shared randomness alone. This is motivated by well-known results in quantum physics which posit that, for certain single-round cooperative games with no communication, shared quantum entanglement enables strategies that outperform those that only use shared randomness. In such cases, we say that there is quantum advantage. Our framework is based on a novel differentiable policy parameterization that enables optimization over quantum measurements, together with a novel policy architecture that decomposes joint policies into a quantum coordinator and decentralized local actors. To illustrate the effectiveness of our proposed method, we first show that we can learn, purely from experience, strategies that attain quantum advantage in single-round games that are treated as black box oracles. We then demonstrate how our machinery can learn policies with quantum advantage in an illustrative multi-agent sequential decision-making problem formulated as a decentralized partially observable Markov decision process (Dec-POMDP).
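As a concrete instance of the "well-known results in quantum physics" the abstract invokes: in the CHSH game, the referee sends uniform bits x, y to two non-communicating players, who output bits a, b and win iff a XOR b = x AND y. Any strategy using only shared randomness wins at most 75% of the time, while measuring a shared Bell state at suitable angles wins cos²(π/8) ≈ 85.4%. A minimal NumPy check (illustrative code, not from the paper):

```python
import numpy as np

def projector(theta, outcome):
    """Rank-1 projector for a single-qubit measurement basis rotated by theta."""
    v = np.array([np.cos(theta), np.sin(theta)])
    if outcome == 1:
        v = np.array([-np.sin(theta), np.cos(theta)])
    return np.outer(v, v)

# Shared entangled state |Phi+> = (|00> + |11>)/sqrt(2)
phi = np.zeros(4)
phi[0] = phi[3] = 1 / np.sqrt(2)
rho = np.outer(phi, phi)

# Optimal measurement angles, indexed by each player's input bit
alice = {0: 0.0, 1: np.pi / 4}
bob = {0: np.pi / 8, 1: -np.pi / 8}

# Win condition: a XOR b == x AND y, averaged over uniform inputs
win = 0.0
for x in (0, 1):
    for y in (0, 1):
        for a in (0, 1):
            for b in (0, 1):
                if (a ^ b) == (x & y):
                    M = np.kron(projector(alice[x], a), projector(bob[y], b))
                    win += 0.25 * np.trace(rho @ M)

print(f"entangled win probability: {win:.4f}")  # 0.8536 > 0.75 classical bound
```

This is exactly a communication-free correlated strategy outside the reach of shared randomness, i.e., the quantum advantage the paper's agents are trained to discover.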
Problem

Research questions and friction points this paper is trying to address.

multi-agent reinforcement learning
coordination
quantum entanglement
decentralized decision-making
communication-free coordination
Innovation

Methods, ideas, or system contributions that make the work stand out.

quantum entanglement
multi-agent reinforcement learning
quantum advantage
differentiable policy parameterization
Dec-POMDP
John Gardiner
Nasdaq, Inc.
Orlando Romero
AI & Quantitative Finance Fellow, Nasdaq, Inc.
Optimization · Machine Learning · Dynamical Systems
Brendan Tivnan
Nasdaq, Inc.
Nicolò Dal Fabbro
Nasdaq, Inc., University of Pennsylvania
George J. Pappas
University of Pennsylvania