🤖 AI Summary
This paper investigates the trade-offs among decision-maker accuracy, societal improvement, and agent welfare in nonlinear strategic learning. Unlike prior work, which is restricted to linear settings and globally optimal responses, we introduce the first model in which individuals respond nonlinearly based only on local information about the decision policy. We theoretically establish that, under nonlinearity, these three welfare objectives are fundamentally in tension: they can be simultaneously optimal only under restrictive conditions. To address this, we propose the first irreducible multi-welfare balancing optimization algorithm, grounded in generalized best-response modeling, welfare Pareto frontier analysis, and a multi-objective optimization framework. Extensive experiments on synthetic and real-world datasets demonstrate that our algorithm significantly improves the balance across all three welfare dimensions, consistently outperforming single-objective baselines.
📝 Abstract
This paper studies algorithmic decision-making in the presence of strategic individual behavior, where an ML model makes decisions about human agents and the agents can adapt their behavior strategically to improve their future data. Existing results on strategic learning have largely focused on the linear setting, where agents with linear labeling functions best respond to a (noisy) linear decision policy. Instead, this work focuses on general non-linear settings where agents respond to the decision policy using only "local information" about the policy. Moreover, we simultaneously consider the objectives of maximizing decision-maker welfare (model prediction accuracy), social welfare (agent improvement induced by strategic behavior), and agent welfare (the extent to which the ML model underestimates the agents). We first generalize the agent best-response model of previous works to the non-linear setting and then investigate the compatibility of the welfare objectives. We show that the three welfare objectives can attain their optima simultaneously only under restrictive conditions that are difficult to satisfy in non-linear settings. These theoretical results imply that existing works maximizing the welfare of only a subset of parties usually diminish the welfare of the others. We therefore argue for the necessity of balancing the welfare of each party in non-linear settings and propose an irreducible optimization algorithm suitable for general strategic learning. Experiments on synthetic and real data validate the proposed algorithm.
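To make the setup concrete, the following is a minimal, hypothetical NumPy sketch of the scenario the abstract describes: agents best respond to a non-linear decision policy using only local information (here, a finite-difference gradient of the policy at their own feature point, moved within a fixed effort budget), after which the three welfare quantities are measured. The specific policy `policy`, true labeling function `true_label`, budget, and welfare formulas are all illustrative assumptions, not the paper's definitions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical non-linear decision policy and true labeling function
# (illustrative stand-ins only, not the paper's exact choices).
def policy(x, w):
    return sigmoid(x @ w + 0.5 * np.sin(x).sum(axis=-1))

def true_label(x):
    return sigmoid(x.sum(axis=-1))

def local_best_response(x, w, budget=0.5, eps=1e-4):
    """Agents observe only local information about the policy: a
    finite-difference gradient at their own point x. Each agent moves
    up to `budget` along that local ascent direction."""
    grad = np.zeros_like(x)
    for j in range(x.shape[-1]):
        e = np.zeros_like(x)
        e[..., j] = eps
        grad[..., j] = (policy(x + e, w) - policy(x - e, w)) / (2 * eps)
    norm = np.linalg.norm(grad, axis=-1, keepdims=True)
    step = np.where(norm > 0, budget * grad / np.maximum(norm, 1e-12), 0.0)
    return x + step

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))          # agents' original features
w = np.array([1.0, -0.5, 0.8])         # policy parameters (assumed)
X_new = local_best_response(X, w)      # features after strategic response

# Three illustrative welfare measures, evaluated after the responses:
dm_welfare = -np.mean((policy(X_new, w) - true_label(X_new)) ** 2)  # prediction accuracy
social_welfare = np.mean(true_label(X_new) - true_label(X))         # agent improvement
agent_welfare = np.mean(policy(X_new, w) - true_label(X_new))       # under-/over-estimation
```

Optimizing `w` for `dm_welfare` alone would generally move `social_welfare` and `agent_welfare` in other directions, which is the incompatibility the paper formalizes and the motivation for balancing all three objectives jointly.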