PoolFlip: A Multi-Agent Reinforcement Learning Security Environment for Cyber Defense

📅 2025-08-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing FlipIt frameworks rely on limited heuristics or attack-specific learning methods, exhibiting poor generalization and vulnerability to novel stealthy attacks. Method: We propose PoolFlip, a multi-agent reinforcement learning (MARL) simulation environment enabling dynamic adversarial modeling of flip-based control, and Flip-PSRO, an algorithm integrating Policy-Space Response Oracles (PSRO), a population-based self-play method, with an ownership-driven utility function to achieve robust, generalizable, and stable defense policies. Contribution/Results: The core innovation lies in the tight integration of game-theoretic modeling with MARL, specifically the first systematic incorporation of population-level adversarial training into the FlipIt paradigm. Experiments demonstrate that Flip-PSRO achieves a two-fold improvement in defense success rate against unseen heuristic attacks compared to baselines, while significantly increasing the average system control rate.

📝 Abstract
Cyber defense requires automating defensive decision-making under stealthy, deceptive, and continuously evolving adversarial strategies. The FlipIt game provides a foundational framework for modeling interactions between a defender and an advanced adversary that compromises a system without being immediately detected. In FlipIt, the attacker and defender compete to control a shared resource by performing a Flip action and paying a cost. However, existing FlipIt frameworks rely on a small number of heuristics or specialized learning techniques, which can lead to brittleness and an inability to adapt to new attacks. To address these limitations, we introduce PoolFlip, a multi-agent gym environment that extends the FlipIt game to allow efficient learning for attackers and defenders. Furthermore, we propose Flip-PSRO, a multi-agent reinforcement learning (MARL) approach that leverages population-based training to train defender agents equipped to generalize against a range of unknown, potentially adaptive opponents. Our empirical results suggest that Flip-PSRO defenders are $2\times$ more effective than baselines at generalizing to a heuristic attack not seen during training. In addition, our newly designed ownership-based utility functions ensure that Flip-PSRO defenders maintain a high level of control while optimizing performance.
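Concretely, the FlipIt dynamics described in the abstract (two players flipping control of a shared resource, each paying a per-flip cost) can be sketched as follows. The flip cost, horizon, and tie-breaking convention here are illustrative assumptions, not PoolFlip's actual configuration:

```python
def flipit_score(attacker_flips, defender_flips, horizon, flip_cost=0.5):
    """Score one FlipIt game: at each timestep the most recent flipper
    owns the resource, and flips are stealthy (a player only learns the
    state when it flips). Utility = control fraction - total flip cost.
    Assumed conventions: the defender starts in control, and on a
    simultaneous flip the defender's move is applied last."""
    owner = "defender"
    control = {"attacker": 0, "defender": 0}
    cost = {"attacker": 0.0, "defender": 0.0}
    for t in range(horizon):
        if t in attacker_flips:
            owner = "attacker"
            cost["attacker"] += flip_cost
        if t in defender_flips:
            owner = "defender"
            cost["defender"] += flip_cost
        control[owner] += 1
    return {p: control[p] / horizon - cost[p] for p in control}
```

For example, with `horizon=10`, an attacker flipping at step 3 and a defender reclaiming at step 6 leaves the attacker in control for 3 of 10 steps, so each player's utility is that control fraction minus the flips it paid for.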
Problem

Research questions and friction points this paper is trying to address.

Automating cyber defense against stealthy adversarial strategies
Overcoming brittleness in existing FlipIt game frameworks
Training defenders to generalize against unknown adaptive opponents
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-agent gym environment for FlipIt game
Population-based training for defender generalization
Ownership-based utility functions optimizing control performance
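A toy version of the population-based idea above: select a defender policy by its average utility against a population of periodic attacker heuristics, then check how it fares against a held-out heuristic. Periodic flipping is a classic FlipIt heuristic, but the periods, cost, and horizon below are invented for illustration and are not the paper's actual training setup:

```python
def control_utility(defender_period, attacker_period, horizon=100, cost=0.01):
    """Defender's utility against a periodic attacker: fraction of time
    in control minus flip costs (defender assumed to start in control,
    with the defender's flip applied last on simultaneous moves)."""
    defender_owns, owned, flips = True, 0, 0
    for t in range(1, horizon + 1):
        if t % attacker_period == 0:
            defender_owns = False
        if t % defender_period == 0:
            defender_owns = True
            flips += 1
        owned += defender_owns
    return owned / horizon - cost * flips

def best_period_vs_population(attacker_periods, candidates=range(1, 21)):
    """Stand-in for training a best response: choose the defender period
    with the best mean utility over the attacker population."""
    return max(candidates, key=lambda d: sum(
        control_utility(d, a) for a in attacker_periods) / len(attacker_periods))
```

For instance, `best_period_vs_population([7, 11])` selects a defender period against a two-heuristic population, and `control_utility(d, 5)` then measures how that choice generalizes to an unseen period-5 attacker, mirroring the held-out evaluation described above.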
Xavier Cadet
Dartmouth College, Hanover, NH 03755, USA
Simona Boboila
Northeastern University, Boston, MA 02115, USA
Sie Hendrata Dharmawan
Dartmouth College, Hanover, NH 03755, USA
Alina Oprea
Northeastern University, Boston, MA 02115, USA
Computer Security · Adversarial Machine Learning · AI Security
Peter Chin
Dartmouth College, Hanover, NH 03755, USA