A Unifying Human-Centered AI Fairness Framework

📅 2025-12-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
AI deployment in high-stakes societal domains raises pressing fairness concerns, yet existing methods struggle to reconcile diverse fairness notions—individual vs. group, outcome vs. opportunity—with predictive accuracy. Method: We propose a human-centered, unified fairness framework that systematically integrates these two conceptual dichotomies into a single mathematical formalism, supporting eight computable metrics and explicitly modeling both marginal and intersectional fairness assumptions. The framework enables value-sensitive, multi-stakeholder negotiation over weighted fairness objectives, lowering barriers for non-expert practitioners. Contribution/Results: Evaluated across four real-world domains—income prediction, criminal justice, credit scoring, and healthcare—the framework quantifies trade-offs among fairness metrics, facilitates context-aware, value-aligned AI deployment decisions, and significantly enhances the interpretability and operationalizability of fairness practice.

📝 Abstract
The increasing use of Artificial Intelligence (AI) in critical societal domains has amplified concerns about fairness, particularly regarding unequal treatment across sensitive attributes such as race, gender, and socioeconomic status. While there has been substantial work on ensuring AI fairness, navigating trade-offs between competing notions of fairness as well as predictive accuracy remains challenging, creating barriers to the practical deployment of fair AI systems. To address this, we introduce a unifying human-centered fairness framework that systematically covers eight distinct fairness metrics, formed by combining individual and group fairness, infra-marginal and intersectional assumptions, and outcome-based and equality-of-opportunity (EOO) perspectives. This structure allows stakeholders to align fairness interventions with their values and contextual considerations. The framework uses a consistent and easy-to-understand formulation for all metrics to reduce the learning curve for non-experts. Rather than privileging a single fairness notion, the framework enables stakeholders to assign weights across multiple fairness objectives, reflecting their priorities and facilitating multi-stakeholder compromises. We apply this approach to four real-world datasets: the UCI Adult census dataset for income prediction, the COMPAS dataset for criminal recidivism, the German Credit dataset for credit risk assessment, and the MEPS dataset for healthcare utilization. We show that adjusting weights reveals nuanced trade-offs between different fairness metrics. Finally, through case studies in judicial decision-making and healthcare, we demonstrate how the framework can inform practical and value-sensitive deployment of fair AI systems.
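The abstract's dichotomies (outcome-based vs. equality-of-opportunity, at the group level) can be made concrete with two standard group-fairness gaps. The sketch below is illustrative only, not the paper's formalism: it computes a statistical parity gap (outcome-based: difference in positive-prediction rates across groups) and an equal opportunity gap (EOO: difference in true-positive rates). The function names and toy data are invented for illustration.

```python
import numpy as np

def statistical_parity_gap(y_pred, group):
    """Outcome-based group fairness: spread in positive-prediction rates."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

def equal_opportunity_gap(y_true, y_pred, group):
    """EOO group fairness: spread in true-positive rates across groups."""
    tprs = []
    for g in np.unique(group):
        positives = (group == g) & (y_true == 1)  # qualified members of group g
        tprs.append(y_pred[positives].mean())
    return max(tprs) - min(tprs)

# Toy binary predictions, labels, and one binary sensitive attribute.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 1])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])

sp = statistical_parity_gap(y_pred, group)        # 0.75 - 0.50 = 0.25
eo = equal_opportunity_gap(y_true, y_pred, group) # 1.0 - 2/3 ≈ 0.333
```

On this toy data the two notions disagree in magnitude, which is exactly the kind of trade-off the framework is designed to surface.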
Problem

Research questions and friction points this paper is trying to address.

Addresses fairness trade-offs in AI systems across sensitive attributes.
Unifies multiple fairness metrics for human-centered AI deployment.
Enables stakeholder-driven weight assignment for fairness objectives.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Human-centered framework covering eight fairness metrics
Consistent formulation reduces learning curve for non-experts
Weight assignment enables multi-stakeholder compromises on fairness
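The weight-assignment idea above can be sketched as a simple scalarization: each stakeholder-chosen weight multiplies one objective (fairness gaps plus an accuracy term), and candidate models are compared on the combined score. This is a minimal sketch of the general pattern, assuming gap-style metrics where smaller is better; the metric values and weights below are hypothetical, not results from the paper.

```python
def weighted_objective(metrics, weights):
    """Scalarize several objectives (all 'smaller is better') using
    stakeholder-assigned weights that sum to 1."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(weights[name] * metrics[name] for name in weights)

# Hypothetical scores for two candidate models:
# statistical parity gap, equal opportunity gap, and error rate.
model_a = {"sp_gap": 0.25, "eoo_gap": 0.33, "error": 0.10}
model_b = {"sp_gap": 0.05, "eoo_gap": 0.10, "error": 0.18}

# A stakeholder who prioritizes equality of opportunity:
w = {"sp_gap": 0.2, "eoo_gap": 0.5, "error": 0.3}
score_a = weighted_objective(model_a, w)  # 0.05 + 0.165 + 0.03 = 0.245
score_b = weighted_objective(model_b, w)  # 0.01 + 0.05 + 0.054 = 0.114
```

Under these weights the less accurate but fairer model_b wins; shifting weight onto `error` would reverse the choice, which is how re-weighting reveals the trade-offs the paper quantifies.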
Munshi Mahbubur Rahman
Department of Information Systems, University of Maryland, Baltimore County, Baltimore, MD, USA
Shimei Pan
University of Maryland - Baltimore County
NLP, Social Media Analytics, AI, Intelligent User Interfaces
James R. Foulds
Department of Information Systems, University of Maryland, Baltimore County, Baltimore, MD, USA