Decomposing Direct and Indirect Biases in Linear Models under Demographic Parity Constraint

📅 2025-11-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
To clarify the sources of bias in linear models under demographic parity constraints, this paper proposes a post-hoc fairness framework that requires no model retraining. It explicitly decomposes bias into the direct effect of the sensitive attribute and the indirect effects mediated through correlated features. Through analytical derivation, the method characterizes how the fairness constraint reshapes each model coefficient and thereby redistributes bias across features. Unlike prior approaches, it imposes no strong distributional assumptions and retains explicit dependence on the sensitive attribute, making the fairness intervention transparent and interpretable at the feature level. Experiments on synthetic and real-world datasets show that the framework captures fairness dynamics missed by existing methods, offering a practical tool for model auditing and bias attribution.
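The direct/indirect split the summary refers to can be written compactly for a linear predictor. The display below is a standard identity for linear models with a binary sensitive attribute, written in notation of our own choosing, not taken from the paper:

```latex
% For a linear predictor f(x, s) = \beta_0 + \beta_S s + \beta^\top x
% with binary sensitive attribute S, the demographic-parity gap splits as
\mathbb{E}[f \mid S=1] - \mathbb{E}[f \mid S=0]
  = \underbrace{\beta_S}_{\text{direct effect}}
  + \underbrace{\beta^{\top}\bigl(\mathbb{E}[X \mid S=1] - \mathbb{E}[X \mid S=0]\bigr)}_{\text{indirect effect via correlated features}}
```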

📝 Abstract
Linear models are widely used in high-stakes decision-making due to their simplicity and interpretability. Yet when fairness constraints such as demographic parity are introduced, their effects on model coefficients, and thus on how predictive bias is distributed across features, remain opaque. Existing approaches for linear models often rely on strong and unrealistic assumptions, or overlook the explicit role of the sensitive attribute, limiting their practical utility for fairness assessment. We extend the work of Chzhen and Schreuder (2022) and Fukuchi and Sakuma (2023) by proposing a post-processing framework that can be applied on top of any linear model to decompose the resulting bias into direct (sensitive-attribute) and indirect (correlated-feature) components. Our method analytically characterizes how demographic parity reshapes each model coefficient, including those of both sensitive and non-sensitive features. This enables a transparent, feature-level interpretation of fairness interventions and reveals how bias may persist or shift through correlated variables. Our framework requires no retraining and provides actionable insights for model auditing and mitigation. Experiments on both synthetic and real-world datasets demonstrate that our method captures fairness dynamics missed by prior work, offering a practical and interpretable tool for responsible deployment of linear models.
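As a concrete illustration of that decomposition, here is a minimal numerical sketch on synthetic data. The data-generating process, variable names, and coefficients are illustrative assumptions of ours, not taken from the paper; the decomposition itself is exact for any linear predictor:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
S = rng.binomial(1, 0.4, size=n).astype(float)   # binary sensitive attribute
X1 = 1.2 * S + rng.normal(size=n)                # feature correlated with S
X2 = rng.normal(size=n)                          # feature independent of S
y = 0.8 * S + 1.5 * X1 + 0.5 * X2 + rng.normal(size=n)

# Ordinary least squares with S kept explicitly in the design matrix.
Z = np.column_stack([np.ones(n), S, X1, X2])
beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
b0, bS, b1, b2 = beta

# Because the predictor is linear, the demographic-parity gap of its
# predictions decomposes exactly into a direct term (the coefficient on S)
# and indirect terms (coefficients times group mean differences in features).
d1 = X1[S == 1].mean() - X1[S == 0].mean()
d2 = X2[S == 1].mean() - X2[S == 0].mean()
direct = bS
indirect = b1 * d1 + b2 * d2

preds = Z @ beta
gap = preds[S == 1].mean() - preds[S == 0].mean()
print(f"DP gap {gap:.3f} = direct {direct:.3f} + indirect {indirect:.3f}")
```

Here the indirect term is driven almost entirely by X1, since X2 has a near-zero group mean difference by construction.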
Problem

Research questions and friction points this paper is trying to address.

Decomposing bias into direct and indirect components in linear models
Analyzing how demographic parity constraints redistribute bias across features
Providing interpretable fairness assessment without model retraining requirements
Innovation

Methods, ideas, or system contributions that make the work stand out.

Decomposes bias into direct and indirect components
Analytically characterizes coefficient reshaping under fairness
Provides a post-processing framework that requires no model retraining (a simplified sketch follows below)
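For one-dimensional predictions, demographic-parity post-processing can be sketched as quantile matching onto the Wasserstein barycenter of the group-conditional prediction distributions, in the spirit of the Chzhen and Schreuder (2022) construction cited in the abstract. This empirical version is a simplified stand-in for the paper's analytical, coefficient-level characterization, and the function name is ours:

```python
import numpy as np

def dp_postprocess(preds, S, grid_size=1001):
    """Enforce demographic parity on 1-D predictions by mapping each group's
    predictions onto the Wasserstein barycenter of the group-conditional
    distributions (empirical quantile matching); no model retraining needed."""
    groups, counts = np.unique(S, return_counts=True)
    weights = counts / len(S)
    qs = np.linspace(0.0, 1.0, grid_size)
    # Barycenter quantile function = population-weighted average of the
    # group quantile functions.
    bary = sum(w * np.quantile(preds[S == g], qs)
               for g, w in zip(groups, weights))
    fair = np.empty_like(preds, dtype=float)
    for g in groups:
        mask = S == g
        # Mid-ranks give each prediction its empirical quantile level within
        # its own group, which is then read off the barycenter.
        ranks = (np.argsort(np.argsort(preds[mask])) + 0.5) / mask.sum()
        fair[mask] = np.interp(ranks, qs, bary)
    return fair
```

After the transport, the group-conditional prediction distributions coincide up to discretization, so the demographic-parity gap vanishes while each prediction moves as little as possible within its group.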
Bertille Tierny
Milliman France, R&D Department, AI Lab
Arthur Charpentier
Université du Québec à Montréal
Risk, insurance, predictive modeling, computational statistics, actuarial science
François Hu
Milliman France, R&D Department, AI Lab