🤖 AI Summary
This paper addresses the gap between AI fairness theory and judicial practice by examining how U.S. courts apply constitutional principles—specifically due process and equal protection—to assess the fairness of recidivism risk assessment (RRA) tools. Method: It develops an interdisciplinary analytical framework that integrates judicial review standards (strict, intermediate, and rational basis scrutiny) with three AI fairness criteria (procedural, group, and individual fairness), employing empirical legal analysis, constitutional interpretation, and policy-text mapping. Contribution/Results: The study reveals a differentiated regulatory logic governing the use of demographic features in RRAs. While technical fairness standards can meaningfully anchor constitutional requirements, current frameworks systematically neglect individual fairness and exhibit tensions in how procedural fairness is applied. In response, the paper proposes a "judicial-review-adapted" fairness mapping framework, offering actionable jurisprudential pathways and institutional interfaces for designing constitutionally compliant RRAs.
📝 Abstract
The AI/HCI and legal communities have developed largely independent conceptualizations of fairness. This conceptual difference hinders the incorporation of technical fairness criteria (e.g., procedural, group, and individual fairness) into sustainable policies and designs, particularly for high-stakes applications like recidivism risk assessment. To foster common ground, we conduct legal research to identify whether and how technical AI conceptualizations of fairness surface in primary legal sources. We find that while major technical fairness criteria can be linked to constitutional mandates such as "Due Process" and "Equal Protection" through judicial interpretation, several challenges arise when operationalizing them into concrete statutes and regulations. These policies often adopt procedural and group fairness but ignore the major technical criterion of individual fairness. Regarding procedural fairness, judicial "scrutiny" categories are relevant but may not fully capture how courts scrutinize the use of demographic features in potentially discriminatory government tools like RRAs. Furthermore, some policies contradict each other on whether to apply procedural fairness to certain demographic features. Thus, we propose a new framework that integrates demographics-related legal scrutiny concepts and technical fairness criteria.
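For readers less familiar with the technical criteria named above, the sketch below illustrates (in a toy setting that is not drawn from the paper) how group fairness and individual fairness are commonly quantified for a risk-scoring tool: group fairness compares outcome rates across protected groups, while individual fairness asks that similar individuals receive similar scores. All function names, data, and thresholds are hypothetical assumptions for illustration only; procedural fairness, by contrast, concerns the decision-making process itself and is not reducible to a single metric like these.

```python
# Illustrative sketch (not from the paper): toy metrics for two of the
# technical fairness criteria discussed above, applied to a hypothetical
# risk-scoring model. All names, data, and thresholds are assumptions.
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Group fairness: gap in positive ("high risk") rates across groups."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

def individual_fairness_worst_gap(scores, features, radius=0.5):
    """Individual fairness: similar individuals should get similar scores.
    Returns the largest score gap among pairs whose feature distance < radius."""
    worst_gap = 0.0
    n = len(scores)
    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(features[i] - features[j]) < radius:
                worst_gap = max(worst_gap, abs(scores[i] - scores[j]))
    return worst_gap

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    features = rng.normal(size=(200, 4))      # toy, non-demographic features
    group = rng.integers(0, 2, size=200)      # toy protected attribute
    scores = 1 / (1 + np.exp(-features @ np.array([0.8, -0.5, 0.3, 0.1])))
    y_pred = (scores > 0.5).astype(int)       # toy "high risk" decision
    print("Demographic parity gap:", demographic_parity_gap(y_pred, group))
    print("Worst similar-pair score gap:", individual_fairness_worst_gap(scores, features))
```

The gap highlighted by the paper is visible even in this toy example: statutes and regulations tend to mandate checks resembling the first metric (group-level rates), while the second, individual-level notion rarely appears in policy text despite being a major technical criterion.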