Strategic Classification With Externalities

📅 2024-10-10
🏛️ arXiv.org
📈 Citations: 2
Influential: 0
📄 PDF
🤖 AI Summary
This paper studies strategic classification under inter-agent manipulative externalities: after a classifier is deployed, multiple agents strategically manipulate their features to influence their classification outcomes, and their manipulations mutually affect one another. To model this multi-agent interaction, the paper introduces cross-agent manipulative externalities within a joint framework that combines a Stackelberg game (where the principal commits to a classifier first) with a downstream simultaneous manipulation game (where agents best-respond to each other). Theoretically, it shows that under certain assumptions the manipulation game admits a unique pure Nash equilibrium that can be computed efficiently, and it establishes PAC learning guarantees for the principal even when a random number of agents manipulate their way to that equilibrium. Algorithmically, it discusses gradient-based optimization of such classifiers, supporting efficient equilibrium computation and end-to-end training of robust classifiers.

📝 Abstract
We propose a new variant of the strategic classification problem: a principal reveals a classifier, and $n$ agents report their (possibly manipulated) features to be classified. Motivated by real-world applications, our model crucially allows the manipulation of one agent to affect another; that is, it explicitly captures inter-agent externalities. The principal-agent interactions are formally modeled as a Stackelberg game, with the resulting agent manipulation dynamics captured as a simultaneous game. We show that under certain assumptions, the pure Nash Equilibrium of this agent manipulation game is unique and can be efficiently computed. Leveraging this result, PAC learning guarantees are established for the learner: informally, we show that it is possible to learn classifiers that minimize loss on the distribution, even when a random number of agents are manipulating their way to a pure Nash Equilibrium. We also comment on the optimization of such classifiers through gradient-based approaches. This work sets the theoretical foundations for a more realistic analysis of classifiers that are robust against multiple strategic actors interacting in a common environment.
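The equilibrium dynamics the abstract describes can be illustrated with a toy linear-quadratic manipulation game. Everything below is an illustrative assumption, not the paper's model: each agent maximizes its classifier score minus a quadratic manipulation cost and a quadratic externality penalty tying its report to the other agents' mean report. Under this cost structure the best response has a closed form and iterated best response is a contraction, so it converges to the unique pure Nash equilibrium:

```python
import numpy as np

rng = np.random.default_rng(0)

n, d = 5, 3            # number of agents, feature dimension (illustrative)
c, gamma = 2.0, 0.8    # manipulation cost weight, externality strength

w = rng.normal(size=d)        # published linear classifier
X = rng.normal(size=(n, d))   # true (unmanipulated) features

# Assumed utility for agent i, with m_i the mean report of the *other* agents:
#   u_i(r_i) = w . r_i - (c/2) ||r_i - x_i||^2 - (gamma/2) ||r_i - m_i||^2
# Setting the gradient to zero gives the closed-form best response:
#   r_i = (w + c * x_i + gamma * m_i) / (c + gamma)
# The update is a contraction with modulus gamma / (c + gamma) < 1,
# so iterated best response converges to the unique pure Nash equilibrium.

R = X.copy()                  # start from truthful reports
for _ in range(200):
    R_new = np.empty_like(R)
    for i in range(n):
        m_i = (R.sum(axis=0) - R[i]) / (n - 1)
        R_new[i] = (w + c * X[i] + gamma * m_i) / (c + gamma)
    if np.max(np.abs(R_new - R)) < 1e-10:
        R = R_new
        break
    R = R_new

# At the fixed point, every agent's first-order condition holds.
for i in range(n):
    m_i = (R.sum(axis=0) - R[i]) / (n - 1)
    foc = w - c * (R[i] - X[i]) - gamma * (R[i] - m_i)
    assert np.max(np.abs(foc)) < 1e-8
```

Because each best response is linear in the other agents' reports, the whole fixed-point map is differentiable in `w`, which is the property that makes the gradient-based classifier optimization mentioned in the abstract feasible.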
Problem

Research questions and friction points this paper is trying to address.

Strategic classification with inter-agent externalities
Uniqueness of the pure Nash equilibrium in the agent manipulation game
PAC learning guarantees for classifiers robust to strategic manipulation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Strategic classification with externalities
Stackelberg game modeling
PAC learning guarantees
🔎 Similar Papers
2024-05-03 · AAAI/ACM Conference on AI, Ethics, and Society · Citations: 7