FairLoop: Software Support for Human-Centric Fairness in Predictive Business Process Monitoring

📅 2025-08-27
🤖 AI Summary
In predictive business process monitoring, sensitive attributes (e.g., gender, age) can induce unfair predictions—especially when used naively, without contextual grounding. To address this, we propose a human-in-the-loop fairness enhancement framework: (1) knowledge distillation from neural networks to decision trees enables interpretable modeling and precise identification of bias-inducing paths driven by sensitive attributes; (2) expert feedback guides model fine-tuning, supporting context-aware masking and reweighting of sensitive features. Crucially, our approach preserves the original process log structure and jointly optimizes fairness and predictive accuracy. Experiments on multiple real-world process datasets demonstrate substantial reductions in group fairness disparities—e.g., ΔDP and ΔEO—while maintaining high predictive performance (AUC degradation <1.2%). The method thus establishes an interpretable, intervention-enabled fairness assurance framework for trustworthy process intelligence.
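The group fairness gaps reported above (ΔDP, ΔEO) are standard metrics; a minimal pure-Python sketch of how they are typically computed over binary predictions (function names are illustrative, not from the FairLoop tool itself):

```python
def delta_dp(y_pred, group):
    """Demographic parity gap: |P(yhat=1 | g=0) - P(yhat=1 | g=1)|."""
    def rate(g):
        preds = [p for p, a in zip(y_pred, group) if a == g]
        return sum(preds) / len(preds)
    return abs(rate(0) - rate(1))

def delta_eo(y_true, y_pred, group):
    """Equal opportunity gap: |TPR_g0 - TPR_g1| over the true positives."""
    def tpr(g):
        preds = [p for y, p, a in zip(y_true, y_pred, group) if a == g and y == 1]
        return sum(preds) / len(preds)
    return abs(tpr(0) - tpr(1))

# Tiny example: group 0 receives positive predictions more often than group 1.
y_true = [1, 1, 0, 1, 1, 0]
y_pred = [1, 1, 0, 0, 1, 0]
group  = [0, 0, 0, 1, 1, 1]
print(round(delta_dp(y_pred, group), 3))          # 0.333 (2/3 vs 1/3)
print(round(delta_eo(y_true, y_pred, group), 3))  # 0.5   (TPR 1.0 vs 0.5)
```

A fairness-aware fine-tuning loop would aim to drive both gaps toward zero while limiting the accompanying AUC loss.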

📝 Abstract
Sensitive attributes like gender or age can lead to unfair predictions in machine learning tasks such as predictive business process monitoring, particularly when they are used without considering context. We present FairLoop, a tool for human-guided bias mitigation in neural network-based prediction models. FairLoop distills decision trees from neural networks, allowing users to inspect and modify unfair decision logic, which is then used to fine-tune the original model towards fairer predictions. Compared to other approaches to fairness, FairLoop enables context-aware bias removal through human involvement, addressing the influence of sensitive attributes selectively rather than excluding them uniformly.
Problem

Research questions and friction points this paper is trying to address.

Mitigating unfair predictions from sensitive attributes
Enabling human-guided bias removal in neural networks
Addressing context-aware fairness without uniform attribute exclusion
Innovation

Methods, ideas, or system contributions that make the work stand out.

Human-guided bias mitigation tool
Distills decision trees from neural networks
Enables context-aware fairness through human involvement
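The contrast between uniform attribute exclusion and the context-aware masking described above can be sketched as follows; the event fields, flagged condition, and function names are invented for illustration and do not reflect FairLoop's actual implementation:

```python
def mask_uniform(event, sensitive=("gender", "age")):
    """Blanket removal: drop sensitive attributes from every event."""
    return {k: v for k, v in event.items() if k not in sensitive}

def mask_contextual(event, flagged_paths):
    """Context-aware removal: drop a sensitive attribute only when the
    event matches a decision-tree path an expert flagged as biased."""
    masked = dict(event)
    for attr, condition in flagged_paths:
        if condition(event):
            masked.pop(attr, None)
    return masked

# Hypothetical expert-flagged path: 'gender' unfairly drives predictions
# only for cases with a requested amount below 10k.
flagged = [("gender", lambda e: e.get("amount", 0) < 10_000)]

e1 = {"activity": "submit", "amount": 5_000, "gender": "F", "age": 41}
e2 = {"activity": "submit", "amount": 50_000, "gender": "F", "age": 41}
print(mask_contextual(e1, flagged))  # 'gender' removed for the flagged context
print(mask_contextual(e2, flagged))  # 'gender' kept elsewhere
```

Keeping the attribute outside the flagged context is what lets this style of masking preserve the original process log structure rather than deleting a column wholesale.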