Fairness-Aware and Interpretable Policy Learning

📅 2025-09-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
Algorithmic decision-making often faces a trade-off between fairness and interpretability. Method: This paper proposes a synergistic optimization framework that integrates sensitive-attribute decorrelation preprocessing with interpretable policy trees. It introduces a novel feature-space inverse transformation mechanism that mitigates the influence of sensitive attributes on decisions while preserving original feature semantics—ensuring policy transparency—and enhances fairness and prediction stability through structural tree optimization. Contribution/Results: Evaluated on Swiss labor market policy allocation, the method significantly improves group-level fairness—e.g., statistical parity increases by 23%—while incurring only a marginal reduction in employment rate (<1.5%). These results demonstrate its effectiveness and practical viability in real-world policy deployment.

📝 Abstract
Fairness and interpretability play an important role in the adoption of decision-making algorithms across many application domains. These requirements are intended to avoid undesirable group differences and to alleviate concerns related to transparency. This paper proposes a framework that integrates fairness and interpretability into algorithmic decision making by combining data transformation with policy trees, a class of interpretable policy functions. The approach is based on pre-processing the data to remove dependencies between sensitive attributes and decision-relevant features, followed by a tree-based optimization to obtain the policy. Since data pre-processing compromises interpretability, an additional transformation maps the parameters of the resulting tree back to the original feature space. This procedure enhances fairness by yielding policy allocations that are pairwise independent of sensitive attributes, without sacrificing interpretability. Using administrative data from Switzerland to analyze the allocation of unemployed individuals to active labor market programs (ALMP), the framework is shown to perform well in a realistic policy setting. Effects of integrating fairness and interpretability constraints are measured through the change in expected employment outcomes. The results indicate that, for this particular application, fairness can be substantially improved at relatively low cost.
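The pre-processing step described above removes dependencies between sensitive attributes and decision-relevant features before a policy tree is fit. As a minimal sketch of one common way to do this (simple linear residualization, not necessarily the exact transformation used in the paper), each feature can be regressed on the sensitive attribute and replaced by its residual, which is uncorrelated with the sensitive attribute by construction:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
s = rng.integers(0, 2, size=n).astype(float)   # sensitive attribute (e.g., group membership)
x = 2.0 * s + rng.normal(size=n)               # decision-relevant feature correlated with s

# Residualize x on s (with intercept): the OLS residual x_tilde is
# orthogonal to s, so the transformed feature carries no linear
# information about the sensitive attribute.
S = np.column_stack([np.ones(n), s])
beta, *_ = np.linalg.lstsq(S, x, rcond=None)
x_tilde = x - S @ beta

print(abs(np.corrcoef(s, x_tilde)[0, 1]))  # numerically zero
```

A policy tree would then be trained on `x_tilde` rather than `x`; the variable names and the linear form of the decorrelation are illustrative assumptions.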
Problem

Research questions and friction points this paper is trying to address.

Integrating fairness and interpretability into algorithmic decision making
Removing dependencies between sensitive attributes and decision features
Enhancing policy allocations without sacrificing interpretability
Innovation

Methods, ideas, or system contributions that make the work stand out.

Data transformation for fairness enhancement
Tree-based optimization for policy learning
Parameter mapping to original feature space
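The third contribution, mapping tree parameters back to the original feature space, can be illustrated under the linear-decorrelation assumption above. If the transformed feature is `x_tilde = x - (b0 + b1 * s)`, then a tree split `x_tilde <= c` is equivalent to a group-specific threshold on the raw feature, `x <= c + b0 + b1 * s`, which keeps the policy readable in the original semantics. The coefficients and the helper function below are hypothetical, for illustration only:

```python
# Hypothetical decorrelation fit: x_tilde = x - (b0 + b1 * s)
b0, b1 = 0.1, 2.0

def original_space_threshold(c_tilde, s_value):
    """Map a policy-tree split 'x_tilde <= c_tilde' back to the original
    feature space: x <= c_tilde + b0 + b1 * s_value (group-specific cutoff)."""
    return c_tilde + b0 + b1 * s_value

# A split at 0.5 in decorrelated space becomes two interpretable
# thresholds on the raw feature, one per group:
t0 = original_space_threshold(0.5, 0.0)
t1 = original_space_threshold(0.5, 1.0)
```

This is why the framework can claim interpretability is not sacrificed: after the inverse mapping, each decision rule still refers to the original, meaningful features.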
Nora Bearth
University of St.Gallen, Rosenbergstrasse 22, 9000 St.Gallen, CH
Michael Lechner
University of St. Gallen, Swiss Institute for Empirical Economic Research (SEW)
Economics · Causal Machine Learning · Econometrics · Sports Economics · Labour Economics
Jana Mareckova
University of St.Gallen, Rosenbergstrasse 22, 9000 St.Gallen, CH
Fabian Muny
University of St.Gallen, Rosenbergstrasse 22, 9000 St.Gallen, CH