🤖 AI Summary
This paper examines the effectiveness and welfare value of machine learning prediction for targeting fairness-oriented social assistance to the most vulnerable populations, such as the long-term unemployed, and systematically compares prediction against traditional policy levers like expanding administrative capacity.
Method: We develop a novel theoretical framework that jointly models prediction accuracy and institutional constraints (e.g., bureaucratic capacity) within a unified welfare analysis, and propose a principle for evaluating predictive value specifically for the worst-off. Using German microdata, we conduct counterfactual policy simulations and sensitivity analyses grounded in mathematical modeling and causal inference.
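The paper's actual model and simulations are not reproduced here, but the following minimal sketch illustrates the kind of counterfactual comparison the method describes: improving prediction accuracy versus expanding capacity, evaluated by how well each reaches the worst-off. Everything in it is an illustrative assumption; the synthetic `need` variable, the noisy risk score, the `capacity` parameter, and the `bottom_decile_coverage` metric stand in for the paper's German microdata and formal welfare framework.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population: latent "need" (e.g., risk of long-term
# unemployment); higher need = worse off. Stands in for real microdata.
N = 100_000
need = rng.exponential(scale=1.0, size=N)

def bottom_decile_coverage(selected: np.ndarray, need: np.ndarray) -> float:
    """Share of the worst-off 10% (highest need) who receive assistance."""
    worst_off = need >= np.quantile(need, 0.9)
    return selected[worst_off].mean()

def simulate(noise_sd: float, capacity: int) -> float:
    """Assign assistance to the `capacity` individuals with the highest
    predicted need, where the prediction is true need plus Gaussian
    noise (noise_sd controls prediction accuracy)."""
    score = need + rng.normal(0.0, noise_sd, size=N)
    selected = np.zeros(N, dtype=bool)
    selected[np.argsort(score)[-capacity:]] = True
    return bottom_decile_coverage(selected, need)

baseline = simulate(noise_sd=1.0, capacity=5_000)
better_prediction = simulate(noise_sd=0.5, capacity=5_000)   # sharper model
more_capacity = simulate(noise_sd=1.0, capacity=10_000)      # bigger program

print(f"baseline coverage of worst-off decile: {baseline:.2%}")
print(f"with more accurate prediction:         {better_prediction:.2%}")
print(f"with doubled administrative capacity:  {more_capacity:.2%}")
```

Under assumptions like these, varying `noise_sd` and `capacity` jointly traces out which lever buys more coverage of the worst-off per unit of cost, which is the spirit of the comparison the paper formalizes.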
Contribution/Results: Under fixed resource constraints, optimizing prediction systems significantly increases assistance coverage among the bottom 10% of the income distribution. The framework provides both actionable evaluation tools and empirical evidence to guide the responsible deployment of fair AI in public policy.
📝 Abstract
Machine learning is increasingly used in government programs to identify and support the most vulnerable individuals, prioritizing assistance for those at greatest risk over optimizing aggregate outcomes. This paper examines the welfare impacts of prediction in equity-driven contexts and how they compare to those of other policy levers, such as expanding bureaucratic capacity. Through mathematical models and a real-world case study on long-term unemployment among German residents, we develop a comprehensive understanding of the relative effectiveness of prediction in surfacing the worst-off. Our findings provide clear analytical frameworks and practical, data-driven tools that help policymakers make principled decisions when designing these systems.