🤖 AI Summary
Existing preprocessing methods in fair supervised learning often suffer from over-regularization: they neglect downstream task characteristics and struggle to balance utility and fairness. This work proposes a task-aware preprocessing framework that explicitly incorporates information about the downstream supervised task when constructing data transformation mappings. Fairness is measured via the Hirschfeld–Gebelein–Rényi (HGR) correlation, and theoretical conditions are derived that simultaneously guarantee improved fairness and preserved model utility. Notably, this approach provides the first such theoretical guarantees for both fairness and utility across arbitrary downstream models, circumventing the over-regularization pitfalls of conventional methods. Experiments on tabular and image datasets demonstrate consistent fairness–utility trade-offs across multiple downstream models, with visual tasks showing modifications confined to the semantic features essential to the primary task.
📝 Abstract
Fairness-aware machine learning has recently drawn attention from various communities seeking to mitigate discrimination against certain societal groups in data-driven tasks. For fair supervised learning, particularly in pre-processing, two main categories have emerged: data fairness and task-tailored fairness. The former directly finds an intermediate distribution among the groups, independent of the type of downstream model, so that a learned downstream classification/regression model returns similar predictive scores for individuals with the same covariates, irrespective of their sensitive attributes. The latter explicitly takes the supervised learning task into account when constructing the pre-processing map. In this work, we study algorithmic fairness for supervised learning and argue that data fairness approaches impose overly strong regularization when viewed through the Hirschfeld–Gebelein–Rényi (HGR) correlation. This motivates us to devise a novel pre-processing approach tailored to supervised learning. We account for the trade-off between fairness and utility in obtaining the pre-processing map. We then study the behavior of arbitrary downstream supervised models learned on the transformed data and derive sufficient conditions that guarantee their fairness improvement and utility preservation. To our knowledge, no prior work among task-tailored methods has theoretically investigated downstream guarantees when using pre-processed data. We further evaluate our framework through comparison studies on tabular and image data sets, showing that it preserves consistent fairness–utility trade-offs across multiple downstream models, in contrast to recent competing methods. For computer vision data in particular, our method alters only the semantic features relevant to the central machine learning task that are necessary to achieve fairness.
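The fairness measure above, the HGR maximal correlation, has a closed form in the discrete case: by Witsenhausen's characterization, it equals the second-largest singular value of the normalized joint-probability matrix Q[i, j] = P(x_i, y_j) / sqrt(P(x_i) P(y_j)) (the largest singular value is always 1, attained by constant functions). A minimal plug-in estimator sketching this idea is below; the function name and the empirical-frequency estimate are illustrative, not the paper's method:

```python
import numpy as np

def hgr_discrete(x, y):
    """Plug-in estimate of the HGR maximal correlation for discrete samples.

    Builds Q[i, j] = P(x_i, y_j) / sqrt(P(x_i) * P(y_j)) from empirical
    frequencies; the second-largest singular value of Q is the HGR value.
    """
    # Map observed values to integer codes.
    xs, xi = np.unique(x, return_inverse=True)
    ys, yi = np.unique(y, return_inverse=True)

    # Empirical joint distribution.
    joint = np.zeros((len(xs), len(ys)))
    np.add.at(joint, (xi, yi), 1.0)
    joint /= joint.sum()

    # Marginals and the normalized joint matrix Q.
    px = joint.sum(axis=1)
    py = joint.sum(axis=0)
    Q = joint / np.sqrt(np.outer(px, py))

    # Singular values are sorted in descending order; s[0] == 1.
    s = np.linalg.svd(Q, compute_uv=False)
    return s[1] if len(s) > 1 else 0.0
```

Two sanity checks: a deterministic copy gives HGR = 1, while an empirically independent pair gives HGR = 0. Unlike Pearson correlation, this value is 1 for any deterministic (including nonlinear) relationship, which is why the paper argues it captures residual dependence on the sensitive attribute more faithfully.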