🤖 AI Summary
To address the severe performance degradation and irreversible information loss caused by pruning in large language model (LLM) compression, this paper proposes a novel three-stage paradigm: regularize first, then apply structured pruning, then perform sparsity-aware fine-tuning. First, data-driven L1/L2 regularization steers critical information toward the modules slated for retention. Second, structured pruning removes the regularized components while preserving parameter interpretability and hardware efficiency. Third, weight-importance redistribution coupled with sparsity-aware fine-tuning compensates for pruning-induced losses. Evaluated under extreme sparsity (e.g., a 90% pruning ratio), the method significantly outperforms state-of-the-art approaches: inference latency decreases by 42%, throughput increases by 2.3×, and perplexity (PPL) degrades by less than 5%. This work is the first to achieve controllable information migration and robust performance preservation at high sparsity levels.
📝 Abstract
Large language models (LLMs) have achieved significant progress across various domains, but their increasing scale results in high computational and memory costs. Recent studies have revealed that LLMs exhibit sparsity, offering the potential to reduce model size through pruning techniques. However, existing pruning methods typically follow a prune-then-finetune paradigm. Since the pruned components still contain valuable information, their direct removal often leads to irreversible performance degradation and imposes a substantial computational burden to recover performance during finetuning. In this paper, we propose a novel paradigm that first applies regularization, then prunes, and finally finetunes. Based on this paradigm, we introduce DReSS, a simple and effective Data-driven Regularized Structured Streamlining method for LLMs. By leveraging a small amount of data to regularize the components to be pruned, DReSS explicitly transfers the important information to the remaining parts of the model in advance. Compared to direct pruning, this reduces the information loss caused by parameter removal, thereby better preserving the model's language modeling capabilities. Experimental results demonstrate that DReSS significantly outperforms existing pruning methods even under extreme pruning ratios, while reducing latency and increasing throughput.
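The core idea, regularizing the components slated for pruning so their information migrates to the retained parts before removal, can be illustrated on a toy problem. The sketch below is a minimal NumPy analogue, not the paper's implementation: two highly correlated regression features stand in for redundant channels in an LLM weight matrix, and an L2 penalty is applied only to the coordinate that will be pruned. All names (`fit`, `pruned_mse`, the penalty strength) are illustrative choices, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a redundant layer: two highly correlated input features,
# so the model can shift weight between them (a hypothetical analogue of
# redundant channels in an LLM weight matrix).
n = 200
x1 = rng.normal(size=n)
X = np.column_stack([x1, x1 + 0.01 * rng.normal(size=n)])
y = X @ np.array([1.0, 1.0])          # ground truth uses both features

def fit(l2_on_pruned, steps=2000, lr=0.01):
    """Gradient descent on MSE, with an L2 penalty applied only to the
    coordinate slated for pruning (index 1) -- the 'regularize first' step."""
    w = np.zeros(2)
    for _ in range(steps):
        grad = 2.0 * X.T @ (X @ w - y) / n
        grad[1] += 2.0 * l2_on_pruned * w[1]   # penalize only the doomed weight
        w -= lr * grad
    return w

def pruned_mse(w):
    """Structured removal of coordinate 1, then measure the damage."""
    w = w.copy()
    w[1] = 0.0
    return float(np.mean((X @ w - y) ** 2))

w_plain = fit(l2_on_pruned=0.0)   # direct-pruning baseline: no migration
w_reg = fit(l2_on_pruned=1.0)     # regularize first: signal migrates to index 0

# Penalizing the to-be-pruned weight lets the surviving weight absorb the
# signal, so removal hurts far less than pruning the unregularized model.
print(pruned_mse(w_plain), pruned_mse(w_reg))
```

In this toy setting the regularized model ends up with nearly all of the predictive signal in the retained coordinate, so zeroing the pruned coordinate barely changes the loss, whereas pruning the unregularized model discards roughly half the signal. DReSS applies the analogous penalty at the level of structured LLM components using a small calibration dataset.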