🤖 AI Summary
In predictive business process monitoring, sensitive attributes (e.g., gender, age) can induce unfair predictions, especially when used naively without contextual grounding. To address this, we propose a human-in-the-loop fairness enhancement framework: (1) knowledge distillation from neural networks to decision trees enables interpretable modeling and precise identification of bias-inducing decision paths driven by sensitive attributes; (2) expert feedback guides model fine-tuning, supporting context-aware masking and reweighting of sensitive features. Crucially, our approach preserves the original process log structure and jointly optimizes fairness and predictive accuracy. Experiments on multiple real-world process datasets demonstrate substantial reductions in group fairness disparities (e.g., ΔDP and ΔEO) while maintaining high predictive performance (AUC degradation <1.2%). The method thus establishes an interpretable, intervention-enabled fairness assurance framework for trustworthy process intelligence.
📝 Abstract
Sensitive attributes like gender or age can lead to unfair predictions in machine learning tasks such as predictive business process monitoring, particularly when used without considering context. We present FairLoop, a tool for human-guided bias mitigation in neural network-based prediction models. FairLoop distills decision trees from neural networks, allowing users to inspect and modify unfair decision logic, which is then used to fine-tune the original model towards fairer predictions. Compared to other approaches to fairness, FairLoop enables context-aware bias removal through human involvement, addressing the influence of sensitive attributes selectively rather than excluding them uniformly.
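The distillation-and-inspection step described above can be sketched as follows. This is a minimal illustrative example, not FairLoop's actual implementation: the synthetic data, feature names, and model choices (an `MLPClassifier` as the black-box predictor, a shallow `DecisionTreeClassifier` as the distilled surrogate) are assumptions made for the sketch.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
n = 1000
# Synthetic event-log features: column 0 stands in for a sensitive
# attribute (e.g. gender, encoded 0/1); columns 1-3 are ordinary
# process features. The outcome is deliberately biased.
X = rng.random((n, 4))
X[:, 0] = (X[:, 0] > 0.5).astype(float)
y = ((X[:, 1] + 0.3 * X[:, 0]) > 0.65).astype(int)

# 1) Train the "black-box" neural predictor.
nn = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
nn.fit(X, y)

# 2) Distill: fit a shallow tree on the network's *predictions*
#    (not the ground truth), so the tree approximates the
#    network's decision logic in an interpretable form.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, nn.predict(X))

# 3) Inspect: any split on the "sensitive" feature marks a
#    candidate bias-inducing path for expert review; in FairLoop
#    such expert edits then guide fine-tuning of the network.
print(export_text(surrogate,
                  feature_names=["sensitive", "f1", "f2", "f3"]))
```

Fitting the surrogate on the network's outputs rather than the true labels is what makes its paths a faithful, auditable proxy for the network's behavior; the printed rules show exactly where the sensitive attribute enters the decision logic.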