FAIR-SIGHT: Fairness Assurance in Image Recognition via Simultaneous Conformal Thresholding and Dynamic Output Repair

📅 2025-04-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses group- and individual-level fairness disparities in image recognition systems. We propose a post-hoc fairness assurance framework that requires neither model retraining nor access to model parameters. Methodologically, we jointly model prediction error and fairness bias through a fairness-aware nonconformity scoring mechanism. To our knowledge, this is the first approach to integrate conformal prediction with dynamic output repair, enabling distribution-free fairness error control under limited calibration samples. Furthermore, we incorporate logit-space bias correction and confidence-based dynamic recalibration, supported by theoretical convergence analysis. Experiments across multiple benchmark datasets demonstrate substantial reductions in fairness gaps (e.g., a 38% decrease in Equalized Odds) while preserving over 99% of the original model's accuracy.

📝 Abstract
We introduce FAIR-SIGHT, an innovative post-hoc framework designed to ensure fairness in computer vision systems by combining conformal prediction with a dynamic output repair mechanism. Our approach calculates a fairness-aware non-conformity score that simultaneously assesses prediction errors and fairness violations. Using conformal prediction, we establish an adaptive threshold that provides rigorous finite-sample, distribution-free guarantees. When the non-conformity score for a new image exceeds the calibrated threshold, FAIR-SIGHT implements targeted corrective adjustments, such as logit shifts for classification and confidence recalibration for detection, to reduce both group and individual fairness disparities, all without retraining or access to internal model parameters. Comprehensive theoretical analysis validates our method's error control and convergence properties. At the same time, extensive empirical evaluations on benchmark datasets show that FAIR-SIGHT significantly reduces fairness disparities while preserving high predictive performance.
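The conformal thresholding step described above can be illustrated with a minimal split-conformal sketch. This is not the paper's exact formulation: the additive combination of the error term with a per-sample disparity term, and the weight `lam`, are assumptions for illustration; only the finite-sample quantile recipe is the standard conformal guarantee.

```python
import numpy as np

def fairness_aware_scores(cal_probs, cal_labels, cal_gaps, lam=0.5):
    """Hypothetical nonconformity score: the classic 1 - p_y error term
    plus a weighted fairness term. `cal_gaps` holds a per-sample
    fairness-violation measure; `lam` is an assumed trade-off weight."""
    error = 1.0 - cal_probs[np.arange(len(cal_labels)), cal_labels]
    return error + lam * cal_gaps

def conformal_threshold(scores, alpha=0.1):
    """Standard split-conformal (1 - alpha) quantile with the
    finite-sample correction, giving distribution-free coverage."""
    n = len(scores)
    q = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    return np.quantile(scores, q, method="higher")

# Toy calibration set: 100 samples, 3 classes
rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(3), size=100)
labels = rng.integers(0, 3, size=100)
gaps = rng.uniform(0.0, 0.2, size=100)

scores = fairness_aware_scores(probs, labels, gaps)
tau = conformal_threshold(scores, alpha=0.1)
# A new image whose combined score exceeds tau triggers output repair.
```

Any test image whose score exceeds `tau` would be routed to the repair stage; images below the threshold pass through unchanged.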
Problem

Research questions and friction points this paper is trying to address.

Deployed image recognition systems exhibit group- and individual-level fairness disparities
Most fairness interventions require retraining or access to internal model parameters
Distribution-free fairness error control is difficult with limited calibration samples
Innovation

Methods, ideas, or system contributions that make the work stand out.

Combines conformal prediction with dynamic output repair
Computes a fairness-aware non-conformity score capturing both prediction error and fairness violations
Applies targeted corrections (logit shifts, confidence recalibration) without retraining the model
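The corrective step in the list above, a logit shift applied to flagged classification outputs, might look like the following sketch. The per-class offset `group_shift` is a hypothetical quantity, assumed here to be estimated on calibration data for the sample's demographic group; the paper's actual correction rule may differ.

```python
import numpy as np

def repair_logits(logits, group_shift):
    """Additive logit-space bias correction for a sample whose
    nonconformity score exceeded the conformal threshold.
    `group_shift` is an assumed per-class offset for the sample's
    group; softmax is re-applied after the shift."""
    z = np.asarray(logits, dtype=float) + np.asarray(group_shift, dtype=float)
    z -= z.max()              # subtract max for numerical stability
    p = np.exp(z)
    return p / p.sum()        # repaired class probabilities

# Example: nudge probability mass toward an under-predicted class
probs = repair_logits([2.0, 0.5, -1.0], group_shift=[0.0, 0.8, 0.0])
```

Because the shift is applied to outputs only, this kind of repair needs no gradient access and no retraining, matching the black-box setting the paper targets.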
Arya Fayyazi
Research Assistant, University of Southern California
Machine Learning · Hardware/Software Co-optimization · EDA · AI Fairness · ML Compiler
M. Kamal
University of Southern California, Los Angeles, California, USA
M. Pedram
University of Southern California, Los Angeles, California, USA