FAIRPLAI: A Human-in-the-Loop Approach to Fair and Private Machine Learning

📅 2025-11-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
Machine learning faces a tripartite tension among fairness, privacy, and accuracy in high-stakes domains such as healthcare and finance: differential privacy (DP) may exacerbate group unfairness; fairness optimization is constrained by privacy requirements on sensitive attributes; and automation struggles to incorporate contextual human judgment. This paper proposes a human-in-the-loop framework that dynamically integrates human inputs into DP training, yielding an interactive ML pipeline unifying differential privacy, fairness-aware intervention, model interpretability, and active-learning–based auditing. We formally define the privacy–fairness Pareto frontier and enable stakeholders to navigate trade-offs on demand. Evaluated on benchmark datasets under strong DP guarantees (ε ≤ 1), our approach achieves significantly lower group unfairness than automated baselines while maintaining high accuracy and decision interpretability—providing a practical, transparent, and context-adaptive governance pathway for high-risk applications.
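The privacy–fairness Pareto frontier mentioned above can be illustrated with a minimal sketch. This is not the paper's implementation; it simply filters candidate operating points to those that are Pareto-optimal over (ε, group unfairness, error), with lower better on every axis, and the candidate tuples are made-up numbers.

```python
# Hypothetical sketch of a privacy-fairness Pareto filter.
# Each point is (epsilon, unfairness, error); lower is better on all axes.

def pareto_frontier(points):
    """Return the points that no other point weakly dominates."""
    return [p for p in points
            if not any(q != p and all(q[i] <= p[i] for i in range(3))
                       for q in points)]

# (epsilon, demographic-parity gap, 1 - accuracy) per candidate model
candidates = [
    (0.5, 0.12, 0.20),
    (1.0, 0.08, 0.15),
    (1.0, 0.10, 0.14),
    (2.0, 0.05, 0.13),
    (0.5, 0.15, 0.25),  # dominated by the first point on every axis
]
print(pareto_frontier(candidates))
```

Stakeholders would then pick an operating point from the surviving set rather than accept a single automated optimum.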

📝 Abstract
As machine learning systems move from theory to practice, they are increasingly tasked with decisions that affect healthcare access, financial opportunities, hiring, and public services. In these contexts, accuracy is only one piece of the puzzle: models must also be fair to different groups, protect individual privacy, and remain accountable to stakeholders. Achieving all three is difficult: differential privacy can unintentionally worsen disparities, fairness interventions often rely on sensitive data that privacy restricts, and automated pipelines ignore that fairness is ultimately a human and contextual judgment. We introduce FAIRPLAI (Fair and Private Learning with Active Human Influence), a practical framework that integrates human oversight into the design and deployment of machine learning systems. FAIRPLAI works in three ways: (1) it constructs privacy-fairness frontiers that make trade-offs between accuracy, privacy guarantees, and group outcomes transparent; (2) it enables interactive stakeholder input, allowing decision-makers to select fairness criteria and operating points that reflect their domain needs; and (3) it embeds a differentially private auditing loop, giving humans the ability to review explanations and edge cases without compromising individual data security. Applied to benchmark datasets, FAIRPLAI consistently preserves strong privacy protections while reducing fairness disparities relative to automated baselines. More importantly, it provides a straightforward, interpretable process for practitioners to manage competing demands of accuracy, privacy, and fairness in socially impactful applications. By embedding human judgment where it matters most, FAIRPLAI offers a pathway to machine learning systems that are effective, responsible, and trustworthy in practice. GitHub: https://github.com/Li1Davey/Fairplai
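The differentially private auditing loop described in the abstract can be sketched with the standard Laplace mechanism: auditors see a noisy group fairness metric rather than individual records. This is an illustrative assumption, not the authors' code; the function name and the sensitivity bound 1/min(n_a, n_b) are hypothetical choices.

```python
# Hypothetical sketch: release a demographic-parity gap under
# epsilon-differential privacy via the Laplace mechanism.
import math
import random

def dp_parity_gap(pos_a, n_a, pos_b, n_b, epsilon):
    """Noisy estimate of P(y=1 | group a) - P(y=1 | group b)."""
    gap = pos_a / n_a - pos_b / n_b
    # Changing one record shifts one group's rate by at most 1/n,
    # so the gap's sensitivity is bounded by 1/min(n_a, n_b).
    sensitivity = 1.0 / min(n_a, n_b)
    scale = sensitivity / epsilon
    # Sample Laplace(0, scale) by inverse CDF on u in (-0.5, 0.5).
    u = random.random() - 0.5
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return gap + noise

# Auditors review only the noisy gap, never raw individual data.
print(dp_parity_gap(pos_a=30, n_a=100, pos_b=50, n_b=100, epsilon=1.0))
```

Larger ε yields a more precise audit at weaker privacy, which is exactly the trade-off the frontier in contribution (1) makes explicit.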
Problem

Research questions and friction points this paper is trying to address.

Achieving fairness, privacy, and accuracy simultaneously in machine learning systems
Addressing conflicts between privacy protections and fairness interventions in ML
Integrating human oversight into automated ML systems for contextual fairness judgments
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates human oversight into machine learning systems
Constructs privacy-fairness frontiers for transparent trade-offs
Enables interactive stakeholder input on fairness criteria
David Sanchez
Serra Hunter Professor and ICREA-Acadèmia Researcher at Universitat Rovira i Virgili (URV)
Semantics · Data privacy · Machine learning
Holly Lopez
Dept. of Mathematics, Mountain View High School
Michelle Buraczyk
Dept. of Mathematics, El Paso Independent School District
Anantaa Kotal
Dept. of Computer Science, The University of Texas at El Paso