🤖 AI Summary
This paper addresses the challenges of vast strategy spaces and slow equilibrium convergence in imperfect-information games. Methodologically, it proposes an enhanced Tree-Exploiting Policy-Space Response Oracle (TE-PSRO) framework featuring: (1) a scalable empirical game-tree representation whose edges are implicit policies learned through deep reinforcement learning; (2) the use of subgame-perfect equilibria (SPE), a refinement of Nash equilibrium, in place of standard Nash equilibria to direct strategy exploration; and (3) a modular, scalable algorithm based on generalized backward induction for computing an SPE in imperfect-information games. Evaluated on a suite of canonical imperfect-information games, including alternating-offer bargaining with outside offers, the approach converges toward equilibrium faster when strategy generation is guided by SPE rather than Nash equilibrium, while keeping time and memory requirements for the growing empirical model tractable. These advances enhance the practicality and scalability of PSRO-based methods in complex, multi-stage imperfect-information settings.
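The paper itself presents no code; as a loose illustration of point (1), here is a minimal Python sketch, with all names hypothetical, of an empirical game tree whose edges carry implicit DRL-learned policies rather than atomic actions of the underlying game:

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, Optional

# An "implicit policy" maps an observation of the underlying game to an
# action; in TE-PSRO these would be policies trained with DRL. Here we
# abstract one as a plain callable. All names below are hypothetical.
Policy = Callable[[object], object]

@dataclass
class EmpiricalNode:
    """A node of the coarsened empirical game tree."""
    player: int                                   # player to move at this node
    infoset: str                                  # information-set identifier
    edges: Dict[str, "EmpiricalEdge"] = field(default_factory=dict)

@dataclass
class EmpiricalEdge:
    """One edge abstracts many underlying-game actions behind one policy."""
    policy: Policy                                # DRL-learned implicit policy
    child: Optional[EmpiricalNode] = None         # subtree the policy leads to

def add_policy_edge(node: EmpiricalNode, label: str, policy: Policy,
                    child: EmpiricalNode) -> None:
    """Grow the empirical tree by one policy edge in a new PSRO epoch."""
    node.edges[label] = EmpiricalEdge(policy=policy, child=child)
```

The point of such a representation is that each edge covers a family of conditions abstracted away in the empirical model, so the tree can keep growing over epochs without enumerating the underlying game's full action space.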
📝 Abstract
Policy Space Response Oracles (PSRO) interleaves empirical game-theoretic analysis with deep reinforcement learning (DRL) to solve games too complex for traditional analytic methods. Tree-exploiting PSRO (TE-PSRO) is a variant of this approach that iteratively builds a coarsened empirical game model in extensive form using data obtained from querying a simulator that represents a detailed description of the game. We make two main methodological advances to TE-PSRO that enhance its applicability to complex games of imperfect information. First, we introduce a scalable representation for the empirical game tree where edges correspond to implicit policies learned through DRL. These policies cover conditions in the underlying game abstracted in the game model, supporting sustainable growth of the tree over epochs. Second, we leverage extensive form in the empirical model by employing refined Nash equilibria to direct strategy exploration. To enable this, we give a modular and scalable algorithm based on generalized backward induction for computing a subgame perfect equilibrium (SPE) in an imperfect-information game. We experimentally evaluate our approach on a suite of games including an alternating-offer bargaining game with outside offers; our results demonstrate that TE-PSRO converges toward equilibrium faster when new strategies are generated based on SPE rather than Nash equilibrium, and with reasonable time/memory requirements for the growing empirical model.
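For the flavor of the equilibrium computation, the following is a minimal sketch of classic backward induction on a perfect-information tree, the base case that generalized backward induction builds on; the paper's generalized algorithm additionally handles imperfect information by recursing over proper subgames, which this toy code does not attempt. All types and names here are our own illustration, not the paper's API.

```python
from dataclasses import dataclass, field
from typing import Dict, Tuple

@dataclass
class Node:
    """Game-tree node; a terminal node carries one payoff per player."""
    player: int = -1                          # -1 marks a terminal node
    children: Dict[str, "Node"] = field(default_factory=dict)
    payoffs: Tuple[float, ...] = ()

def backward_induction(node: Node, node_id: str = "root"):
    """Return (equilibrium payoffs, action prescribed at each decision node).

    Solving every subgame bottom-up yields a profile that is optimal in
    each subgame, i.e. subgame perfect, not merely Nash at the root.
    """
    if node.player < 0:                       # terminal: payoffs are given
        return node.payoffs, {}
    plan: Dict[str, str] = {}
    best_action, best_payoffs = None, None
    for action, child in node.children.items():
        payoffs, subplan = backward_induction(child, f"{node_id}/{action}")
        plan.update(subplan)                  # keep off-path prescriptions too
        if best_payoffs is None or payoffs[node.player] > best_payoffs[node.player]:
            best_action, best_payoffs = action, payoffs
    plan[node_id] = best_action
    return best_payoffs, plan

# Toy two-move bargaining game: player 0 proposes a split, player 1 accepts
# or rejects (rejection gives both players 0).
leaf = lambda p: Node(payoffs=p)
tree = Node(player=0, children={
    "offer_fair":   Node(player=1, children={"accept": leaf((5.0, 5.0)),
                                             "reject": leaf((0.0, 0.0))}),
    "offer_greedy": Node(player=1, children={"accept": leaf((8.0, 2.0)),
                                             "reject": leaf((0.0, 0.0))}),
})
payoffs, plan = backward_induction(tree)
# payoffs == (8.0, 2.0); `plan` also prescribes the responder's action at the
# unreached "offer_fair" node, which is what makes the profile subgame perfect.
```

The contrast in the final comment reflects the motivation for SPE-guided exploration: unlike an arbitrary Nash equilibrium, an SPE pins down rational behavior off the equilibrium path as well, giving the strategy-generation step credible targets in every subgame of the empirical model.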