Overcoming Fairness Trade-offs via Pre-processing: A Causal Perspective

📅 2025-01-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses two core challenges in algorithmic fairness: the trade-off between fairness and predictive accuracy, and conflicts among multiple fairness criteria. To this end, the authors propose FiND (fictitious and normatively desired), a causally grounded framework defining an ideal fair world in which protected attributes exert no causal influence on the target variable. They theoretically establish that classical fairness metrics are inherently compatible within the FiND world, and that fairness and predictive performance become aligned there. Moreover, they introduce a quantitative measure of FiND-world approximation, overcoming the validation bottleneck that arises because the fairness baseline is unobservable. Leveraging causal-graph-guided data reweighting and counterfactual generation as pre-processing techniques, the approach achieves significant improvements in statistical parity and equal opportunity on both synthetic and real-world datasets while preserving model accuracy, thereby resolving the fairness-accuracy trade-off and the multi-criteria incompatibility problem.

📝 Abstract
Training machine learning models for fair decisions faces two key challenges: The *fairness-accuracy trade-off* results from enforcing fairness, which weakens predictive performance in contrast to an unconstrained model. The incompatibility of different fairness metrics poses another trade-off, also known as the *impossibility theorem*. Recent work identifies the bias within the observed data as a possible root cause and shows that fairness and predictive performance are in fact in accord when predictive performance is measured on unbiased data. We offer a causal explanation for these findings using the framework of the FiND (fictitious and normatively desired) world, a "fair" world where protected attributes have no causal effects on the target variable. We show theoretically that (i) classical fairness metrics deemed to be incompatible are naturally satisfied in the FiND world, while (ii) fairness aligns with high predictive performance. We extend our analysis by suggesting how one can benefit from these theoretical insights in practice, using causal pre-processing methods that approximate the FiND world. Additionally, we propose a method for evaluating the approximation of the FiND world via pre-processing in practical use cases where we do not have access to the FiND world. In simulations and empirical studies, we demonstrate that these pre-processing methods are successful in approximating the FiND world and resolve both trade-offs. Our results provide actionable solutions for practitioners to achieve fairness and high predictive performance simultaneously.
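To make the pre-processing idea concrete, here is a minimal sketch of classical fairness-aware reweighting (in the style of Kamiran and Calders), where each row gets weight P(A=a)·P(Y=y)/P(A=a, Y=y) so that, after weighting, the target is statistically independent of the protected attribute. This is an illustrative standard baseline, not the paper's causal-graph-guided method; the column names `A` and `Y` and the toy data are assumptions for the example.

```python
import numpy as np
import pandas as pd

def reweigh(df: pd.DataFrame, protected: str = "A", target: str = "Y") -> pd.Series:
    """Per-row weights P(A=a) * P(Y=y) / P(A=a, Y=y).

    After weighting, the weighted joint distribution of (A, Y) factorizes,
    so the weighted label rate is identical across protected groups.
    """
    p_a = df[protected].value_counts(normalize=True)
    p_y = df[target].value_counts(normalize=True)
    p_ay = df.groupby([protected, target]).size() / len(df)
    return df.apply(
        lambda r: p_a[r[protected]] * p_y[r[target]] / p_ay[(r[protected], r[target])],
        axis=1,
    )

# Toy biased data: the protected attribute A shifts the label rate of Y.
rng = np.random.default_rng(0)
A = rng.integers(0, 2, 1000)
Y = (rng.random(1000) < 0.3 + 0.4 * A).astype(int)
df = pd.DataFrame({"A": A, "Y": Y})

w = reweigh(df)
# Weighted positive rates per group are now equal (statistical parity
# of the labels), while the overall weighted label rate is unchanged.
p1 = np.average(df.Y[df.A == 1], weights=w[df.A == 1])
p0 = np.average(df.Y[df.A == 0], weights=w[df.A == 0])
```

A short calculation shows why this works: rows in cell (a, y) each receive weight n_a·n_y/(n·n_ay), so the weighted mass of (a, y) is n_a·n_y/n, which factorizes over a and y. The paper's contribution goes beyond this baseline by using the causal graph to decide *which* dependencies to remove, targeting the FiND world rather than plain independence.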
Problem

Research questions and friction points this paper is trying to address.

Machine Learning Fairness
Predictive Performance
Fairness Criteria Conflict
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fairness in Machine Learning
Ideal Unbiased World (FiND)
Bias Mitigation Techniques
Charlotte Leininger
LMU Munich, Germany
Simon Rittel
LMU Munich, Germany and Munich Center for Machine Learning (MCML), Germany
Ludwig Bothmann
Postdoctoral Researcher, LMU Munich
Statistics · Fairness-aware ML · Interpretable ML · Causal Inference · Computer Vision