A Generic Framework for Conformal Fairness

📅 2025-05-22
🏛️ International Conference on Learning Representations
📈 Citations: 1
Influential: 0
🤖 AI Summary
This work addresses the unfair coverage problem of conformal prediction (CP) on data containing sensitive attributes. We formally define “conformal fairness” as a constraint on the disparity in marginal coverage across sensitive groups. We propose the first theoretically grounded conformal fairness framework, which relaxes the standard i.i.d. assumption to accommodate non-i.i.d. structured data—such as graphs. Our method integrates exchangeability assumptions, group-wise calibration, and adaptive confidence adjustment to jointly control both coverage validity and fairness. Experiments on graph and tabular datasets demonstrate that our approach strictly satisfies the theoretical coverage guarantee while reducing inter-group coverage disparity to within a user-specified threshold. It consistently outperforms existing baselines in both fairness and calibration fidelity.
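The group-wise calibration step described above can be sketched as split conformal prediction with a separate finite-sample quantile per sensitive group, so that each group attains the target marginal coverage. This is a minimal illustration under that assumption; the function names are hypothetical and the paper's adaptive confidence adjustment is not reproduced here.

```python
import numpy as np

def groupwise_thresholds(cal_scores, cal_groups, alpha=0.1):
    """Compute a per-group conformal quantile so that each sensitive
    group separately attains >= 1 - alpha marginal coverage."""
    thresholds = {}
    for g in np.unique(cal_groups):
        s = np.sort(cal_scores[cal_groups == g])
        n = len(s)
        # Finite-sample conformal quantile: the ceil((n+1)(1-alpha))-th
        # smallest calibration score, clipped to the largest score.
        k = min(int(np.ceil((n + 1) * (1 - alpha))), n)
        thresholds[g] = s[k - 1]
    return thresholds

def predict_sets(label_scores, groups, thresholds):
    """Include a label in the prediction set when its nonconformity
    score is below the threshold of the point's sensitive group.
    `label_scores` has shape (n_points, n_labels)."""
    return [np.flatnonzero(label_scores[i] <= thresholds[g])
            for i, g in enumerate(groups)]
```

Calibrating per group trades some statistical efficiency (smaller calibration sets per quantile) for equalized group-wise coverage, which is the disparity the framework constrains.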

📝 Abstract
Conformal Prediction (CP) is a popular method for uncertainty quantification with machine learning models. While conformal prediction provides probabilistic guarantees regarding the coverage of the true label, these guarantees are agnostic to the presence of sensitive attributes within the dataset. In this work, we formalize Conformal Fairness, a notion of fairness using conformal predictors, and provide a theoretically well-founded algorithm and associated framework to control for the gaps in coverage between different sensitive groups. Our framework leverages the exchangeability assumption (implicit to CP) rather than the typical IID assumption, allowing us to apply the notion of Conformal Fairness to data types and tasks that are not IID, such as graph data. Experiments on graph and tabular datasets demonstrate that the algorithm controls fairness-related gaps, in addition to coverage, in line with theoretical expectations.
Problem

Research questions and friction points this paper is trying to address.

Ensuring fairness in conformal prediction across sensitive groups
Extending fairness guarantees to non-IID data like graphs
Controlling coverage gaps between groups with theoretical guarantees
Innovation

Methods, ideas, or system contributions that make the work stand out.

Conformal Fairness framework for sensitive groups
Leverages exchangeability beyond IID assumptions
Applies to non-IID data like graph datasets
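The coverage gap the framework controls can be illustrated as the largest pairwise difference in empirical marginal coverage across sensitive groups. The helper below is a hypothetical sketch for measuring that gap, not code from the paper.

```python
import numpy as np

def coverage_gap(covered, groups):
    """Max pairwise difference in empirical coverage across groups:
    `covered[i]` is 1 if point i's prediction set contains its true
    label, and `groups[i]` is its sensitive-group identifier."""
    rates = [np.mean(covered[groups == g]) for g in np.unique(groups)]
    return max(rates) - min(rates)
```

Under the framework's guarantee, this gap should fall within the user-specified threshold while every group's coverage stays at or above the nominal level.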