🤖 AI Summary
This paper addresses the challenge of ensuring machine learning fairness under resource constraints, a setting where conventional fairness algorithms, which ignore such constraints, often fail in deployment. The authors propose the first "resource-constrained fairness" framework, formally defining and quantifying the resource cost of achieving fairness. They show that resource availability fundamentally governs this cost, and they establish a unified paradigm for evaluation and optimization that jointly accounts for resource budgets and group fairness. Methodologically, resource consumption is modeled through classifier threshold tuning, and fairness metrics (e.g., equalized odds) are optimized subject to explicit resource constraints. The approach combines theoretical analysis, including tight bounds on achievable fairness under adjustable thresholds, with empirical validation. Results demonstrate that resource scarcity substantially increases the cost of fairness, and the derived bounds provide actionable design guidelines for deploying fair models in real-world, resource-limited settings.
📝 Abstract
Access to resources strongly constrains the decisions we make. While we might wish to offer every student a scholarship, or schedule every patient for follow-up meetings with a specialist, limited resources mean that this is not possible. When deploying machine learning systems, these resource constraints are typically enforced by varying the threshold of a classifier. However, most existing tools for fair machine learning disregard finite resource limitations: they do not allow resource budgets to be specified and do not remain fair when thresholds are varied. This makes them ill-suited for real-world deployment. Our research introduces the concept of "resource-constrained fairness" and quantifies the cost of fairness within this framework. We demonstrate that the level of available resources significantly influences this cost, a factor overlooked in previous evaluations.
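To make the threshold-based view of resource constraints concrete, here is a minimal sketch (not the paper's method) of the core tension on synthetic data: a single global threshold that spends exactly the available budget can select the two groups at very different rates, while group-specific thresholds can spend the same budget at equal per-group selection rates. Equal selection rate is used here as a simplified fairness proxy (demographic parity rather than equalized odds); the group sizes, score distributions, and budget are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative classifier scores for two groups; group B's scores skew lower.
scores_a = rng.uniform(0.2, 1.0, size=600)
scores_b = rng.uniform(0.0, 0.8, size=400)
scores = np.concatenate([scores_a, scores_b])
groups = np.array([0] * 600 + [1] * 400)

budget = 300  # resource constraint: at most 300 positive decisions

# Budget-only policy: one global threshold that admits exactly `budget` items.
global_thresh = np.sort(scores)[::-1][budget - 1]
selected = scores >= global_thresh

rate_a = selected[groups == 0].mean()  # selection rate in group A
rate_b = selected[groups == 1].mean()  # selection rate in group B

# Fairness-aware policy under the same budget: split the budget so both
# groups are selected at the same rate, via group-specific thresholds.
n_a, n_b = 600, 400
target_rate = budget / (n_a + n_b)
k_a = round(target_rate * n_a)   # slots for group A
k_b = budget - k_a               # remaining slots for group B
thresh_a = np.sort(scores_a)[::-1][k_a - 1]
thresh_b = np.sort(scores_b)[::-1][k_b - 1]

print(f"global threshold:  A rate {rate_a:.2f}, B rate {rate_b:.2f}")
print(f"per-group thresholds: both rates {target_rate:.2f}, budget still {k_a + k_b}")
```

The gap between `rate_a` and `rate_b` under the single threshold is the kind of disparity the abstract refers to, and the per-group thresholds show that restoring parity consumes the same budget differently across groups; how costly that reallocation is, and its limits under metrics like equalized odds, is what the paper formalizes.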