A Human-In-The-Loop Approach for Improving Fairness in Predictive Business Process Monitoring

📅 2025-08-24
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
In predictive business process monitoring, machine learning models often exhibit discriminatory predictions against sensitive attributes (e.g., gender, age) due to data bias. Existing debiasing approaches naively remove sensitive features, overlooking their context-dependent dual role—sometimes contributing to fairness, sometimes to unfairness—within the same process. This paper proposes a model-agnostic, human-in-the-loop fairness enhancement framework. It first distills the original model’s decision logic into an interpretable decision tree; then, incorporating expert feedback, it dynamically identifies and rectifies unfair usage paths of sensitive attributes within specific process contexts. Experiments demonstrate that our method preserves high predictive accuracy while significantly improving group fairness metrics—including statistical parity and equal opportunity—thereby achieving an effective accuracy–fairness trade-off.

📝 Abstract
Predictive process monitoring enables organizations to proactively react and intervene in running instances of a business process. Given an incomplete process instance, predictions about the outcome, next activity, or remaining time are created. This is done by powerful machine learning models, which have shown impressive predictive performance. However, the data-driven nature of these models makes them susceptible to finding unfair, biased, or unethical patterns in the data. Such patterns lead to biased predictions based on so-called sensitive attributes, such as the gender or age of process participants. Previous work has identified this problem and offered solutions that mitigate biases by removing sensitive attributes entirely from the process instance. However, sensitive attributes can be used both fairly and unfairly in the same process instance. For example, during a medical process, treatment decisions could be based on gender, while the decision to accept a patient should not be based on gender. This paper proposes a novel, model-agnostic approach for identifying and rectifying biased decisions in predictive business process monitoring models, even when the same sensitive attribute is used both fairly and unfairly. The proposed approach uses a human-in-the-loop approach to differentiate between fair and unfair decisions through simple alterations on a decision tree model distilled from the original prediction model. Our results show that the proposed approach achieves a promising tradeoff between fairness and accuracy in the presence of biased data. All source code and data are publicly available at https://doi.org/10.5281/zenodo.15387576.
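The core mechanism described above can be sketched in a few lines: the black-box model's logic is distilled into an interpretable decision tree, and an expert then neutralizes a split that uses a sensitive attribute unfairly while leaving fair uses of the same attribute elsewhere in the tree intact. The nested-dict tree representation, the `neutralize_split` helper, and the loan-style example below are illustrative assumptions, not the paper's actual data structures or code:

```python
# Minimal sketch (assumed representation, not the paper's implementation):
# a distilled decision tree as nested dicts, where an expert-identified
# unfair split on a sensitive attribute is replaced by its fair branch.

def predict(node, instance):
    """Follow splits until a leaf (a node with a 'label') is reached."""
    while "label" not in node:
        branch = "left" if instance[node["feature"]] <= node["threshold"] else "right"
        node = node[branch]
    return node["label"]

def neutralize_split(root, path, fair_branch="left"):
    """Replace the split node at `path` (e.g. ["right"]) with the subtree of
    its expert-chosen fair branch, removing the sensitive-attribute decision."""
    parent, cur = None, root
    for step in path:
        parent, cur = cur, cur[step]
    replacement = cur[fair_branch]  # expert feedback picks the fair branch (assumption)
    if parent is None:
        return replacement
    parent[path[-1]] = replacement
    return root

# Distilled tree: the root splits fairly on a process attribute ('amount'),
# but its right child unfairly splits on the sensitive attribute 'gender'.
tree = {
    "feature": "amount", "threshold": 1000,
    "left": {"label": "accept"},
    "right": {
        "feature": "gender", "threshold": 0.5,  # unfair use flagged by the expert
        "left": {"label": "accept"},
        "right": {"label": "reject"},
    },
}

applicant = {"amount": 5000, "gender": 1}
print(predict(tree, applicant))        # biased prediction: "reject"
tree = neutralize_split(tree, ["right"])
print(predict(tree, applicant))        # after rectification: "accept"
```

Because the alteration happens on the distilled surrogate rather than inside the original model, the approach stays model-agnostic: the rectified tree can serve as the deployed predictor or guide retraining.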
Problem

Research questions and friction points this paper is trying to address.

Addressing biased predictions in predictive business process monitoring models
Differentiating fair and unfair uses of sensitive attributes in processes
Achieving a tradeoff between fairness and accuracy with human input
Innovation

Methods, ideas, or system contributions that make the work stand out.

Human-in-the-loop approach for bias identification
Model-agnostic technique using decision tree alterations
Differentiates fair and unfair sensitive attribute usage