A Hierarchical Importance-Guided Multi-objective Evolutionary Framework for Deep Neural Network Pruning

📅 2026-04-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the high-dimensional, large-scale, and highly non-convex optimization challenges inherent in pruning over-parameterized deep neural networks by proposing a layer-wise importance-guided multi-objective evolutionary framework. The method formulates pruning as a large-scale multi-objective optimization problem and employs a two-stage cooperative mechanism combining coarse-grained continuous search with fine-grained binary optimization. Leveraging importance-aware sampling, adaptive mutation, and Pareto-front refinement, it efficiently explores the exponential decision space to identify Pareto-optimal solutions balancing model accuracy and compactness. Experiments on CIFAR-10 and CIFAR-100 demonstrate that the approach achieves parameter compression rates of up to 51.9% for ResNet-56 and 38.9% for ResNet-110 with negligible accuracy loss, significantly outperforming existing evolutionary pruning methods.
📝 Abstract
The optimization of over-parameterized deep neural networks represents a large-scale, high-dimensional, and strongly non-convex decision problem that challenges existing optimization frameworks. Current evolutionary and gradient-based pruning methods often struggle to scale to such dimensionalities, as they rely on flat search spaces, scalarized objectives, or repeated retraining, leading to premature convergence and prohibitive computational cost. This paper introduces a hierarchical importance-guided evolutionary framework that reformulates convolutional network pruning as a tractable large-scale multi-objective optimization problem. In the first phase, a continuous evolutionary search performs coarse exploration of weight-wise pruning thresholds to shrink the search space and identify promising regions of the Pareto set. The second phase applies a fine-grained binary evolutionary optimization constrained to the surviving weights, where importance-aware sampling and adaptive variation operators refine local search in the sparse region of the Pareto set. This hierarchical design combines global exploration and localized exploitation to achieve a well-distributed Pareto set of networks balancing compactness and accuracy. Empirical results on CIFAR-10 and CIFAR-100 using ResNet-56 and ResNet-110 confirm the method's effectiveness: pruning achieves up to 51.9% and 38.9% parameter reductions with almost no accuracy loss, outperforming existing state-of-the-art evolutionary DNN pruning methods. The proposed method contributes a scalable evolutionary approach for solving very-large-scale multi-objective optimization problems, offering a general paradigm extendable to other domains where the decision space is exponentially large, objective functions are conflicting, and efficient trade-off discovery is essential.
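The two-phase design in the abstract (coarse continuous threshold search, then importance-aware binary mask refinement, with Pareto-front selection at each step) can be sketched on a toy weight vector. This is a minimal illustration under stated assumptions, not the paper's implementation: importance is taken as |w|, accuracy loss is approximated by the fraction of importance removed, and all population sizes, mutation rates, and operator choices are this sketch's own.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for one layer's trained weights; importance = |w|.
# (Illustrative only -- the paper prunes real ResNet-56/110 networks
# and measures actual accuracy; everything below is a cheap proxy.)
weights = rng.normal(size=200)
importance = np.abs(weights)

def objectives(mask):
    """Two conflicting objectives, both minimized: lost importance
    (a crude accuracy-loss proxy) and retained parameter fraction."""
    lost = importance[~mask].sum() / importance.sum()
    return (lost, mask.mean())

def nondominated(points):
    """Indices of Pareto-optimal points (minimization in both objectives)."""
    keep = []
    for i, p in enumerate(points):
        dominated = any(q[0] <= p[0] and q[1] <= p[1] and q != p
                        for j, q in enumerate(points) if j != i)
        if not dominated:
            keep.append(i)
    return keep

# Phase 1: coarse continuous search over pruning thresholds.
thresholds = rng.uniform(0.0, importance.max(), size=20)
for _ in range(30):
    children = np.clip(thresholds + rng.normal(0.0, 0.1, thresholds.size),
                       0.0, None)
    pool = np.concatenate([thresholds, children])
    front = nondominated([objectives(importance >= t) for t in pool])
    thresholds = pool[front][:20]   # environmental selection: keep the front

# Phase 2: fine-grained binary refinement seeded by a phase-1 solution.
pop = [importance >= np.median(thresholds)]
p_flip = 0.2 * (1.0 - importance / importance.max())  # importance-aware:
for _ in range(30):                                   # low-importance weights
    kids = []                                         # are pruned more often
    for m in pop:
        child = m.copy()
        child[m & (rng.random(m.size) < p_flip)] = False
        kids.append(child)
    allm = pop + kids
    front = nondominated([objectives(m) for m in allm])
    pop = [allm[i] for i in front][:10]

pareto = sorted(objectives(m) for m in pop)
print(pareto)  # trade-off curve: fewer kept weights <-> more lost importance
```

The final `pareto` list is the sketch's analogue of the paper's well-distributed Pareto set: each entry trades compactness against the accuracy proxy, and no entry dominates another.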
Problem

Research questions and friction points this paper is trying to address.

deep neural network pruning
multi-objective optimization
large-scale optimization
evolutionary algorithms
Pareto set
Innovation

Methods, ideas, or system contributions that make the work stand out.

hierarchical evolutionary optimization
importance-guided pruning
multi-objective DNN pruning
Pareto set refinement
large-scale neural network compression
Zak Khan
Department of Computer Science and Physics, Wilfrid Laurier University, Waterloo, N2L 3C5, Ontario, Canada
Azam Asilian Bidgoli
Assistant Professor, Wilfrid Laurier University, Canada
Machine Learning · Multi-objective Optimization · Evolutionary Computation · Feature Selection