Efficient Fairness-Performance Pareto Front Computation

📅 2024-09-26
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
In representation learning, a fundamental trade-off exists between fairness and classification performance; however, existing methods lack principled means to assess whether an obtained fairness–accuracy Pareto curve approximates the true Pareto frontier of the underlying data distribution. This paper establishes a structural characterization of optimal fair representations, reducing the high-dimensional continuous optimization problem to a compact discrete formulation. Based on this structural analysis and discretization, we propose a model-agnostic method that computes the exact fairness–performance Pareto frontier without training complex downstream classifiers, relying solely on off-the-shelf concave–convex programming solvers. Evaluated across multiple real-world datasets, our approach efficiently yields precise Pareto fronts, substantially outperforming state-of-the-art representation learning algorithms. The resulting benchmark provides an interpretable, reproducible, and broadly applicable gold standard for evaluating fairness-aware algorithms.

📝 Abstract
There is a well-known intrinsic trade-off between the fairness of a representation and the performance of classifiers derived from the representation. Due to the complexity of optimisation algorithms in most modern representation learning approaches, for a given method it may be non-trivial to decide whether the obtained fairness-performance curve of the method is optimal, i.e., whether it is close to the true Pareto front for these quantities for the underlying data distribution. In this paper we propose a new method to compute the optimal Pareto front, which does not require the training of complex representation models. We show that optimal fair representations possess several useful structural properties, and that these properties enable a reduction of the computation of the Pareto front to a compact discrete problem. We then also show that these compact approximating problems can be efficiently solved via off-the-shelf concave-convex programming methods. Since our approach is independent of the specific model of representations, it may be used as the benchmark to which representation learning algorithms may be compared. We experimentally evaluate the approach on a number of real world benchmark datasets.
Problem

Research questions and friction points this paper is trying to address.

Evaluating optimal fairness-performance trade-offs in representation learning
Computing Pareto front without training complex representation models
Providing benchmark for comparing representation learning algorithms
Innovation

Methods, ideas, or system contributions that make the work stand out.

Computes Pareto front without training complex models
Reduces problem to compact discrete optimization
Solves with off-the-shelf concave-convex programming
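To make the "compact discrete problem" idea concrete, here is a minimal sketch of the general recipe, not the paper's exact formulation: on a small discretized joint distribution P(z, a, y) over a representation alphabet, a binary sensitive attribute, and a binary label, the best accuracy of a randomized classifier under a demographic-parity budget is a linear program, and sweeping the budget traces a fairness-performance front. The toy distribution, the demographic-parity criterion, and all names here are illustrative assumptions; the paper uses concave-convex programming for its own objective.

```python
import numpy as np
from scipy.optimize import linprog

# Toy discretized joint distribution P[z, a, y] over a small representation
# alphabet Z, binary sensitive attribute a, and binary label y.
# (Hypothetical numbers for illustration; in practice the distribution
# would be estimated from data.)
rng = np.random.default_rng(0)
Z = 8
P = rng.dirichlet(np.ones(Z * 2 * 2)).reshape(Z, 2, 2)

P_zy = P.sum(axis=1)                      # P(z, y)
P_za = P.sum(axis=2)                      # P(z, a)
P_z_given_a = P_za / P_za.sum(axis=0)     # columns: P(z | a=0), P(z | a=1)

def max_accuracy(eps):
    """Best accuracy of a randomized classifier q_z = P(yhat=1 | z)
    subject to a demographic-parity gap of at most eps; an LP in q."""
    # accuracy = sum_z [q_z P(z, y=1) + (1 - q_z) P(z, y=0)]
    #          = sum_z P(z, y=0) + sum_z q_z (P(z,1) - P(z,0))
    c = -(P_zy[:, 1] - P_zy[:, 0])        # linprog minimizes, so negate
    d = P_z_given_a[:, 1] - P_z_given_a[:, 0]
    A_ub = np.vstack([d, -d])             # encodes |d . q| <= eps
    b_ub = np.array([eps, eps])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0.0, 1.0)] * Z)
    return P_zy[:, 0].sum() - res.fun

# Sweep the fairness budget to trace a (discrete) Pareto front.
front = [(eps, max_accuracy(eps)) for eps in np.linspace(0.0, 0.5, 6)]
```

Because the problem is solved directly on the discretized distribution, no representation model or downstream classifier is ever trained, which is the sense in which such a front can serve as a model-independent benchmark.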