Variance-Based Pruning for Accelerating and Compressing Trained Networks

📅 2025-07-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address high latency, computational overhead, and memory consumption in large-model deployment, this paper proposes a structured pruning method that avoids extensive retraining. Our approach automatically identifies redundant channels via neuron activation variance and performs structured pruning accordingly. To mitigate accuracy degradation, we introduce a mean activation compensation mechanism. Furthermore, only minimal fine-tuning—just ten epochs—is required for rapid accuracy recovery. On ImageNet-1k, the pruned DeiT-Base model retains over 70% of its original top-1 accuracy without any fine-tuning; after ten fine-tuning epochs, it recovers 99% of the original accuracy. The method reduces MACs by 35%, compresses parameters by 36%, and accelerates inference by 1.44×. Our approach achieves an effective balance among efficiency, practicality, and accuracy robustness.

📝 Abstract
Increasingly expensive training of ever larger models such as Vision Transformers motivates reusing the vast library of already trained state-of-the-art networks. However, their latency, high computational costs, and memory demands pose significant challenges for deployment, especially on resource-constrained hardware. While structured pruning methods can reduce these factors, they often require costly retraining, sometimes for up to hundreds of epochs, or even training from scratch to recover the accuracy lost through the structural modifications. Maintaining the performance of trained models after structured pruning, and thereby avoiding extensive retraining, remains a challenge. To solve this, we introduce Variance-Based Pruning, a simple, structured one-shot pruning technique for efficiently compressing networks with minimal fine-tuning. Our approach first gathers activation statistics, which are used to select neurons for pruning. Simultaneously, the mean activations are integrated back into the model to preserve a high degree of performance. On ImageNet-1k recognition tasks, we demonstrate that directly after pruning, DeiT-Base retains over 70% of its original performance and requires only 10 epochs of fine-tuning to regain 99% of the original accuracy, while simultaneously reducing MACs by 35% and model size by 36%, thus speeding up the model by 1.44×.
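The core idea in the abstract, pruning low-variance neurons and integrating their mean activations back into the model, can be sketched on a toy two-layer MLP. This is a minimal numpy illustration under my own assumptions (the layer sizes, variable names, and ReLU MLP are hypothetical), not the paper's actual implementation: for each pruned hidden neuron, its mean activation is folded into the next layer's bias so the expected output is preserved.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-layer MLP: x -> h = relu(x @ W1 + b1) -> y = h @ W2 + b2
d_in, d_hid, d_out = 8, 16, 4
W1, b1 = rng.normal(size=(d_in, d_hid)), rng.normal(size=d_hid)
W2, b2 = rng.normal(size=(d_hid, d_out)), rng.normal(size=d_out)

def forward(x):
    h = np.maximum(x @ W1 + b1, 0.0)
    return h @ W2 + b2

# 1) Gather activation statistics on calibration data.
x_cal = rng.normal(size=(512, d_in))
h = np.maximum(x_cal @ W1 + b1, 0.0)
mean, var = h.mean(axis=0), h.var(axis=0)

# 2) Select the lowest-variance hidden neurons for structured pruning.
prune_ratio = 0.5
n_prune = int(prune_ratio * d_hid)
prune_idx = np.argsort(var)[:n_prune]
keep_idx = np.setdiff1d(np.arange(d_hid), prune_idx)

# 3) Structurally remove those neurons, then compensate: fold their mean
#    activations into the next layer's bias so E[y] is unchanged.
W1_p, b1_p = W1[:, keep_idx], b1[keep_idx]
W2_p = W2[keep_idx, :]
b2_p = b2 + mean[prune_idx] @ W2[prune_idx, :]  # mean-activation compensation

y_full = forward(x_cal)
y_pruned = np.maximum(x_cal @ W1_p + b1_p, 0.0) @ W2_p + b2_p
print("mean squared error after pruning:", ((y_full - y_pruned) ** 2).mean())
```

Because a low-variance neuron's output is nearly constant, replacing it with its mean loses little information; dropping it without compensation would instead shift the next layer's inputs by that mean, which is where most of the one-shot accuracy loss would come from.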
Problem

Research questions and friction points this paper is trying to address.

Reducing latency and computational costs of large trained networks
Avoiding extensive retraining after structured pruning
Maintaining model performance while compressing network size
Innovation

Methods, ideas, or system contributions that make the work stand out.

Variance-Based Pruning for structured compression
One-shot pruning with minimal finetuning
Preserves performance via mean activation integration