FairSAM: Fair Classification on Corrupted Data Through Sharpness-Aware Minimization

📅 2025-03-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
Image classification models exhibit degraded overall robustness under noisy data (e.g., impulse or Gaussian noise) and, critically, suffer pronounced performance disparities across demographic subgroups, exacerbating algorithmic unfairness. Existing robust learning methods (e.g., Sharpness-Aware Minimization, SAM) neglect subgroup fairness, while conventional fairness-aware approaches fail to maintain accuracy parity among subgroups under data corruption. Method: the paper proposes a unified framework that jointly optimizes robustness and subgroup fairness, comprising (1) a noise-aware metric of subgroup performance degradation; (2) FairSAM, a novel integration of fairness constraints into the SAM objective to co-optimize worst-subgroup robustness and cross-group accuracy balance; and (3) noise-invariant feature regularization with subgroup-aware weighting. Results: evaluated on multiple real-world datasets, the method significantly improves accuracy parity across subgroups under noise while preserving strong overall robustness, effectively mitigating the robustness–fairness trade-off.
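The page does not reproduce the FairSAM objective itself, but the summary builds on standard SAM mechanics: ascend to a nearby sharpness peak, then descend using the gradient taken at that perturbed point. The sketch below runs a plain SAM-style update on a toy two-subgroup loss where the worse-off subgroup is upweighted. The step size `lr`, radius `rho`, subgroup targets, and the weighting scheme are all invented for illustration and are not the paper's method.

```python
# Toy SAM-style update with subgroup reweighting (illustrative only;
# not the FairSAM objective, which is not given on this page).

def sam_step(w, grad_fn, rho=0.05, lr=0.1):
    """One Sharpness-Aware Minimization step on a scalar parameter:
    perturb w toward higher loss, then descend from the perturbed point."""
    g = grad_fn(w)
    if g == 0.0:
        return w
    eps = rho * g / abs(g)       # ascent step to the local sharpness peak
    g_adv = grad_fn(w + eps)     # gradient evaluated at the perturbed point
    return w - lr * g_adv

def make_grad(targets, weights):
    """Gradient of a weighted sum of per-subgroup quadratic losses:
    each subgroup pulls w toward its own optimum."""
    def grad_fn(w):
        return sum(wt * 2.0 * (w - t) for t, wt in zip(targets, weights))
    return grad_fn

# Two subgroups with optima at 1.0 and 3.0; the second (hypothetically the
# worse-off group under corruption) gets a larger weight.
grad_fn = make_grad(targets=[1.0, 3.0], weights=[0.3, 0.7])
w = 0.0
for _ in range(200):
    w = sam_step(w, grad_fn)
print(round(w, 1))  # -> 2.4, the weighted optimum 0.3*1 + 0.7*3
```

The upweighted subgroup dominates the solution, which is the qualitative effect a worst-subgroup-aware SAM variant aims for; the actual FairSAM weighting and perturbation are defined in the paper.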

📝 Abstract
Image classification models trained on clean data often suffer significant performance degradation when exposed to corrupted test data, such as images with impulse noise, Gaussian noise, or environmental noise. This degradation not only impacts overall performance but also disproportionately affects certain demographic subgroups, raising critical algorithmic bias concerns. Although robust learning algorithms like Sharpness-Aware Minimization (SAM) have shown promise in improving overall model robustness and generalization, they fall short in addressing the biased performance degradation across demographic subgroups. Existing fairness-aware machine learning methods, such as fairness constraints and reweighing strategies, aim to reduce performance disparities but struggle to maintain robust and equitable accuracy across demographic subgroups when faced with data corruption. This reveals an inherent tension between robustness and fairness on corrupted data. To address these challenges, we introduce a novel metric specifically designed to assess performance degradation across subgroups under data corruption. Additionally, we propose FairSAM, a new framework that integrates Fairness-oriented strategies into SAM to deliver equalized performance across demographic groups under corrupted conditions. Our experiments on multiple real-world datasets and various predictive tasks show that FairSAM successfully reconciles robustness and fairness, offering a structured solution for equitable and resilient image classification in the presence of data corruption.
Problem

Research questions and friction points this paper is trying to address.

Address biased performance degradation across demographic subgroups under data corruption
Reconcile robustness and fairness in image classification with corrupted data
Propose FairSAM to ensure equitable performance under corrupted conditions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates fairness-oriented strategies into Sharpness-Aware Minimization
Novel metric for assessing subgroup performance degradation under corruption
Balances robustness and fairness on corrupted data
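The paper's exact degradation metric is not given on this page. As a minimal sketch, assuming the metric compares each subgroup's accuracy between clean and corrupted evaluations and reports the worst-case drop (the group names and accuracy values below are hypothetical):

```python
# Hedged sketch of a subgroup degradation metric: per-group accuracy drop
# between clean and corrupted test sets, plus the worst-group drop.
# The paper's actual metric may be defined differently.

def subgroup_degradation(clean_acc, corrupt_acc):
    """Return per-group accuracy drops and the largest (worst-group) drop."""
    drops = {g: clean_acc[g] - corrupt_acc[g] for g in clean_acc}
    return drops, max(drops.values())

clean = {"group_a": 0.92, "group_b": 0.90}      # hypothetical clean accuracies
corrupt = {"group_a": 0.85, "group_b": 0.70}    # hypothetical corrupted accuracies
drops, worst = subgroup_degradation(clean, corrupt)
print(round(worst, 2))  # -> 0.2 (group_b degrades far more than group_a)
```

A metric of this shape makes the fairness gap under corruption explicit: overall accuracy can look acceptable while one subgroup absorbs most of the degradation.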