🤖 AI Summary
This paper addresses sequential Stackelberg decision-making in leader-follower general-sum stochastic games (LF-GSSGs), with the goal of efficiently computing strong Stackelberg equilibria (SSEs). Existing methods struggle to combine theoretical rigor with computational scalability. To overcome this, we establish the first formal result showing that LF-GSSGs can be losslessly reduced to state-abstracted Markov decision processes grounded in a "credible set", a state-dependent collection of rational follower responses. Leveraging this reduction, we propose a novel dynamic programming framework and design a Bellman recursion algorithm with ε-optimality guarantees. Our key contribution lies in explicitly modeling follower rationality as a state-dependent credible policy set, which enables a compact characterization of asymmetric commitment structures. Experiments on security games and mixed-motive resource allocation benchmarks demonstrate substantial improvements in both leader utility and computational efficiency over state-of-the-art algorithms.
📝 Abstract
Leader-follower general-sum stochastic games (LF-GSSGs) model sequential decision-making under asymmetric commitment, where a leader commits to a policy and a follower best responds, yielding a strong Stackelberg equilibrium (SSE) with leader-favourable tie-breaking. This paper introduces a dynamic programming (DP) framework for computing SSEs that applies Bellman recursion over credible sets: state abstractions that formally represent all rational follower best responses under partial leader commitments. We first prove that any LF-GSSG admits a lossless reduction to a Markov decision process (MDP) over credible sets. We further establish that synthesising an optimal memoryless deterministic leader policy is NP-hard, motivating the development of ε-optimal DP algorithms with provable guarantees on leader exploitability. Experiments on standard mixed-motive benchmarks, including security games, resource allocation, and adversarial planning, demonstrate empirical gains in leader value and runtime scalability over state-of-the-art methods.
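To make the Stackelberg Bellman recursion concrete, the sketch below runs backward induction on a tiny invented finite-horizon leader-follower game. It is only an illustration of the SSE tie-breaking described in the abstract, not the paper's algorithm: all game data (`STATES`, `PAYOFF`, `NEXT`, `HORIZON`) are assumptions, transitions are deterministic, policies are Markov, and the credible-set machinery for history-dependent commitments is not reproduced.

```python
# Illustrative sketch (not the paper's credible-set DP): backward induction
# computing stage-wise Stackelberg values in a toy finite-horizon game.
# All game data below are invented for illustration.

STATES = ["s0", "s1"]
LEADER_ACTS = ["a0", "a1"]
FOLLOWER_ACTS = ["b0", "b1"]
HORIZON = 3

# PAYOFF[(state, leader_action, follower_action)] = (leader_utility, follower_utility)
PAYOFF = {
    ("s0", "a0", "b0"): (2.0, 1.0), ("s0", "a0", "b1"): (0.0, 1.0),
    ("s0", "a1", "b0"): (1.0, 0.0), ("s0", "a1", "b1"): (3.0, 2.0),
    ("s1", "a0", "b0"): (1.0, 2.0), ("s1", "a0", "b1"): (0.0, 0.0),
    ("s1", "a1", "b0"): (0.0, 1.0), ("s1", "a1", "b1"): (2.0, 1.0),
}
# Deterministic transitions for simplicity: the two states alternate.
NEXT = {k: ("s1" if k[0] == "s0" else "s0") for k in PAYOFF}

def sse_backward_induction():
    """Return V[s] = (leader_value, follower_value) over HORIZON stages."""
    V = {s: (0.0, 0.0) for s in STATES}  # terminal values
    for _ in range(HORIZON):
        newV = {}
        for s in STATES:
            best = None  # best (leader, follower) pair over leader commitments
            for a in LEADER_ACTS:
                def totals(b, s=s, a=a):
                    uL, uF = PAYOFF[(s, a, b)]
                    vL, vF = V[NEXT[(s, a, b)]]
                    return (uL + vL, uF + vF)
                # Follower best-responds to the committed action; ties are
                # broken in the leader's favour, as in the SSE definition.
                b_star = max(FOLLOWER_ACTS,
                             key=lambda b: (totals(b)[1], totals(b)[0]))
                cand = totals(b_star)
                if best is None or cand[0] > best[0]:
                    best = cand
            newV[s] = best
        V = newV
    return V

print(sse_backward_induction())
```

Note that the leader-favourable tie-breaking is encoded in the `max` key: the follower's total utility is compared first, and the leader's total is used only to break exact ties among the follower's best responses.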