FairContrast: Enhancing Fairness through Contrastive Learning and Customized Augmenting Methods on Tabular Data

📅 2025-10-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing self-supervised and contrastive learning methods for tabular representation learning inadequately address fairness, lacking explicit debiasing mechanisms. To bridge this gap, the authors propose FairContrast, a fairness-aware contrastive learning framework for tabular data. FairContrast introduces a sensitive-attribute-aware positive sample selection strategy and combines supervised signals with tailored tabular augmentations, including numerical perturbation and class reweighting, to explicitly disentangle sensitive information from task-relevant features in the representation space. Extensive experiments on multiple real-world tabular datasets show that FairContrast significantly reduces group- and individual-level fairness disparities (e.g., ΔSP and ΔEO improve by 15–32%) while maintaining or even improving downstream predictive accuracy. To the authors' knowledge, FairContrast is the first framework to jointly optimize fairness and predictive performance in tabular representation learning.
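The sensitive-attribute-aware positive selection described above can be sketched as follows. This is a minimal illustrative sketch, not the paper's exact procedure: the function name, the "same label, different sensitive group" rule, and the fallback to any same-label sample are all assumptions inferred from the summary.

```python
import numpy as np

def select_positive(anchor_idx, labels, sensitive, rng):
    """Pick a positive sample for `anchor_idx` that shares its class label
    but belongs to a different sensitive-attribute group, so the contrastive
    loss pulls representations together across demographic groups.
    (Hypothetical sketch; the paper's selection rule may differ.)"""
    y, s = labels[anchor_idx], sensitive[anchor_idx]
    # Same-label candidates from a *different* sensitive group.
    candidates = np.where((labels == y) & (sensitive != s))[0]
    candidates = candidates[candidates != anchor_idx]
    if len(candidates) == 0:
        # Assumed fallback: any same-label sample other than the anchor.
        candidates = np.where(labels == y)[0]
        candidates = candidates[candidates != anchor_idx]
    return int(rng.choice(candidates))
```

Pairing each anchor with a cross-group positive is one plausible way to realize the "disentangle sensitive information" objective: the encoder is rewarded for mapping same-class samples close together regardless of their sensitive-attribute value.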

📝 Abstract
As AI systems become more embedded in everyday life, the development of fair and unbiased models becomes more critical. Considering the social impact of AI systems is not merely a technical challenge but a moral imperative. As evidenced in numerous research studies, learning fair and robust representations has proven to be a powerful approach to effectively debiasing algorithms and improving fairness while maintaining essential information for prediction tasks. Representation learning frameworks, particularly those that utilize self-supervised and contrastive learning, have demonstrated superior robustness and generalizability across various domains. Despite the growing interest in applying these approaches to tabular data, the issue of fairness in these learned representations remains underexplored. In this study, we introduce a contrastive learning framework specifically designed to address bias and learn fair representations in tabular datasets. By strategically selecting positive pair samples and employing supervised and self-supervised contrastive learning, we significantly reduce bias compared to existing state-of-the-art contrastive learning models for tabular data. Our results demonstrate the efficacy of our approach in mitigating bias with a minimal trade-off in accuracy, and in leveraging the learned fair representations in various downstream tasks.
Problem

Research questions and friction points this paper is trying to address.

Addressing bias in tabular data using contrastive learning methods
Developing fair representations while maintaining prediction task accuracy
Reducing algorithmic bias through customized sample selection strategies
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses contrastive learning for tabular data fairness
Selects positive pairs strategically to reduce bias
Combines supervised and self-supervised contrastive learning methods
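Among the tailored tabular augmentations the summary mentions is numerical perturbation, used to generate alternative views of a sample for contrastive training. The sketch below is a common, generic form of this augmentation; the function name, the Gaussian noise model, and the noise scale are assumptions, not the paper's specification.

```python
import numpy as np

def perturb_numeric(x, num_cols, sigma=0.05, rng=None):
    """Create an augmented view of a tabular batch by adding small
    Gaussian noise to the numerical columns only, leaving categorical
    columns untouched. (Illustrative sketch; noise model is assumed.)"""
    rng = rng if rng is not None else np.random.default_rng(0)
    view = x.astype(float).copy()
    noise = rng.normal(0.0, sigma, size=(x.shape[0], len(num_cols)))
    view[:, num_cols] += noise  # perturb numerical features in place
    return view
```

Restricting the noise to numerical columns is the standard design choice for tabular views: perturbing one-hot or ordinal categorical codes would change a sample's meaning rather than produce a semantically equivalent view.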
Aida Tayebi
Department of Industrial Engineering, University of Central Florida, Orlando, FL 32816
Ali Khodabandeh Yalabadi
Department of Industrial Engineering, University of Central Florida, Orlando, FL 32816
Mehdi Yazdani-Jahromi
University of Central Florida
artificial intelligence · computational drug discovery · algorithmic fairness
Ozlem Ozmen Garibay
Assistant Professor, University of Central Florida