Integrating Fairness and Model Pruning Through Bi-level Optimization

📅 2023-12-15
📈 Citations: 1
Influential: 0
🤖 AI Summary
Model pruning often exacerbates algorithmic bias and undermines social fairness. Method: This paper proposes the first end-to-end differentiable fair structured pruning paradigm, jointly optimizing pruning masks and weight updates via bi-level optimization. It explicitly incorporates fairness constraints, such as statistical parity, into the structured pruning framework, enabling simultaneous optimization of accuracy, fairness, and sparsity. A gradient-based fair regularization scheme and an alternating optimization strategy ensure differentiability and training efficiency. Results: On multiple benchmark datasets, the method achieves high sparsity (70–90%) with less than 1.5% accuracy degradation, and it reduces equalized odds (EO) disparity by 40–65%, significantly outperforming existing pruning approaches in fairness-aware compression.
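The summarized approach, alternating between an inner weight update and an outer update of a differentiable pruning mask under a fairness penalty, can be sketched on a toy linear model. This is a minimal illustration of the general idea, not the authors' implementation: the statistical-parity surrogate (squared gap in mean predictions across groups), the L1 sparsity term, and all hyperparameters here are our own assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 samples, 8 features, binary sensitive attribute s.
n, d = 200, 8
X = rng.normal(size=(n, d))
s = rng.integers(0, 2, size=n)          # sensitive group membership
y = X @ rng.normal(size=d) + 0.1 * rng.normal(size=n)

w = 0.1 * rng.normal(size=d)            # weights (inner variable)
m = np.ones(d)                          # soft pruning mask (outer variable)
lam_fair, lam_sparse, lr = 1.0, 0.01, 0.05

def predict(w, m, X):
    return X @ (w * m)                  # mask gates each weight

def grads(w, m):
    """Gradients of task loss + fairness penalty w.r.t. w and m."""
    p = predict(w, m, X)
    r = p - y                           # residual of the MSE task loss
    # Differentiable statistical-parity surrogate:
    # squared gap in mean prediction between the two groups.
    gap = p[s == 1].mean() - p[s == 0].mean()
    dgap = np.where(s == 1, 1.0 / (s == 1).sum(), -1.0 / (s == 0).sum())
    dl_dp = 2.0 * r / n + 2.0 * lam_fair * gap * dgap
    g = X.T @ dl_dp                     # gradient w.r.t. the product w * m
    return g * m, g * w                 # chain rule: d/dw and d/dm

for step in range(300):
    gw, gm = grads(w, m)
    w -= lr * gw                                  # inner step: weights
    m -= lr * (gm + lam_sparse * np.sign(m))      # outer step: mask (+ L1)
    m = np.clip(m, 0.0, 1.0)
```

The alternating loop stands in for the bi-level structure: in the paper the outer problem selects the mask subject to fairness constraints while the inner problem fits the surviving weights; here both are collapsed into simple gradient steps for readability.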
📝 Abstract
Deep neural networks have achieved exceptional results across a range of applications. As the demand for efficient and sparse deep learning models escalates, the significance of model compression, particularly pruning, is increasingly recognized. Traditional pruning methods, however, can unintentionally intensify algorithmic biases, leading to unequal prediction outcomes in critical applications and raising concerns about the tension between pruning practices and social justice. To tackle this challenge, we introduce a novel concept of fair model pruning, which involves developing a sparse model that adheres to fairness criteria. In particular, we propose a framework to jointly optimize the pruning mask and weight update processes under fairness constraints. This framework is engineered to compress models that maintain performance while ensuring fairness in a unified process. To this end, we formulate the fair pruning problem as a novel constrained bi-level optimization task and derive efficient and effective solution strategies. We design experiments across various datasets and scenarios to validate our proposed method. Our empirical analysis contrasts our framework with several mainstream pruning strategies, emphasizing our method's superiority in maintaining model fairness, performance, and efficiency.
Problem

Research questions and friction points this paper is trying to address.

Balancing fairness and model pruning via bi-level optimization
Addressing algorithmic biases in traditional pruning methods
Ensuring model performance and fairness in compression
Innovation

Methods, ideas, or system contributions that make the work stand out.

Bi-level optimization for fair pruning
Joint pruning mask and weight optimization
Fairness-constrained model compression framework
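The headline fairness result, a 40–65% reduction in equalized odds (EO) disparity, refers to a standard metric that can be computed directly from predictions. A minimal sketch follows; the function name and the "max of TPR/FPR gaps" formulation are common conventions, not taken from the paper itself:

```python
import numpy as np

def equalized_odds_gap(y_true, y_pred, s):
    """Largest gap in TPR or FPR between two sensitive groups.

    y_true, y_pred: binary arrays of labels and hard predictions.
    s: binary array of sensitive-group membership.
    Returns a value in [0, 1]; lower means closer to equalized odds.
    """
    gaps = []
    for label in (1, 0):                      # label=1 -> TPR, label=0 -> FPR
        rates = []
        for g in (0, 1):
            mask = (y_true == label) & (s == g)
            rates.append(y_pred[mask].mean()) # P(y_pred=1 | y=label, s=g)
        gaps.append(abs(rates[0] - rates[1]))
    return max(gaps)

# Small worked example: TPR gap and FPR gap are both 0.5.
y_true = np.array([1, 1, 1, 1, 0, 0, 0, 0])
s      = np.array([0, 0, 1, 1, 0, 0, 1, 1])
y_pred = np.array([1, 0, 1, 1, 0, 0, 1, 0])
gap = equalized_odds_gap(y_true, y_pred, s)   # -> 0.5
```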
Yucong Dai
Clemson University, Clemson, SC 29634, USA
Gen Li
Clemson University, Clemson, SC 29634, USA
Feng Luo
Clemson University, Clemson, SC 29634, USA
Xiaolong Ma
Assistant Professor, The University of Arizona
Deep Learning · Computer Vision · Efficient Learning System · Trustworthy AI
Yongkai Wu
Clemson University
Machine Learning · Data Mining · Causal Inference · Algorithmic Fairness · AI Ethics