🤖 AI Summary
This work addresses the unfair coverage behavior of conformal prediction (CP) on data containing sensitive attributes. We formally define "conformal fairness" as a constraint on the disparity in marginal coverage across sensitive groups. We propose the first theoretically grounded conformal fairness framework, which relaxes the standard i.i.d. assumption to the weaker exchangeability assumption, accommodating structured non-i.i.d. data such as graphs. Our method integrates exchangeability, group-wise calibration, and adaptive confidence adjustment to jointly control both coverage validity and fairness. Experiments on graph and tabular datasets demonstrate that our approach satisfies the theoretical coverage guarantee while reducing inter-group coverage disparity to within a user-specified threshold, consistently outperforming existing baselines in both fairness and calibration fidelity.
📝 Abstract
Conformal Prediction (CP) is a popular method for uncertainty quantification with machine learning models. While conformal prediction provides probabilistic guarantees on coverage of the true label, these guarantees are agnostic to the presence of sensitive attributes within the dataset. In this work, we formalize *Conformal Fairness*, a notion of fairness for conformal predictors, and provide a theoretically well-founded algorithm and associated framework to control the gaps in coverage between different sensitive groups. Our framework leverages the exchangeability assumption (implicit to CP) rather than the typical i.i.d. assumption, allowing us to apply the notion of Conformal Fairness to data types and tasks that are not i.i.d., such as graph data. Experiments on graph and tabular datasets demonstrate that the algorithm controls fairness-related gaps while achieving coverage aligned with theoretical expectations.
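To make the core idea concrete, the sketch below shows group-wise split-conformal calibration on a toy binary-attribute problem: a separate conformal quantile is computed per sensitive group, so each group attains the target coverage and the inter-group coverage gap shrinks. This is an illustrative simplification under synthetic data, not the paper's actual algorithm; the score choice (one minus the model's probability of the true label) and the Beta-distributed confidences are assumptions for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (illustrative assumption, not the paper's method): a model emits
# a probability for the true label; group 1 is a "harder" sensitive group.
n_cal, n_test = 2000, 1000
group_cal = rng.integers(0, 2, n_cal)    # sensitive attribute, two groups
group_test = rng.integers(0, 2, n_test)
p_true_cal = rng.beta(5 - 2 * group_cal, 2, n_cal)
p_true_test = rng.beta(5 - 2 * group_test, 2, n_test)

alpha = 0.1  # target miscoverage rate (1 - alpha coverage)

# Nonconformity score: 1 - model probability of the true label.
scores_cal = 1.0 - p_true_cal

# Group-wise calibration: one conformal threshold per sensitive group,
# using the standard finite-sample-corrected quantile index.
thresholds = {}
for g in (0, 1):
    s = np.sort(scores_cal[group_cal == g])
    n = len(s)
    k = int(np.ceil((n + 1) * (1 - alpha))) - 1
    thresholds[g] = s[min(k, n - 1)]

# A test point is "covered" if its true-label score falls below its
# group's threshold (i.e., the true label lands in the prediction set).
scores_test = 1.0 - p_true_test
covered = scores_test <= np.array([thresholds[g] for g in group_test])
cov_by_group = {g: covered[group_test == g].mean() for g in (0, 1)}
gap = abs(cov_by_group[0] - cov_by_group[1])
print(cov_by_group, gap)
```

With a single shared threshold, the harder group would be under-covered; calibrating per group equalizes coverage at roughly 1 - alpha for both, which is the coverage-disparity control the framework formalizes and extends beyond the i.i.d. setting.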