DReSS: Data-driven Regularized Structured Streamlining for Large Language Models

📅 2025-01-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the severe performance degradation and irreversible information loss caused by pruning in large language model (LLM) compression, this paper proposes a novel three-stage paradigm: regularize first, then apply structured pruning, then perform sparsity-aware fine-tuning. First, data-driven L1/L2 regularization steers critical information toward the modules slated for retention. Second, structured pruning is performed, preserving parameter interpretability and hardware efficiency. Third, weight-importance redistribution coupled with sparsity-aware fine-tuning compensates for pruning-induced losses. Evaluated under extreme sparsity (e.g., a 90% pruning ratio), the method significantly outperforms state-of-the-art approaches: inference latency decreases by 42%, throughput increases by 2.3×, and perplexity (PPL) degrades by less than 5%. This work is the first to achieve controllable information migration and robust performance preservation at high sparsity levels.

📝 Abstract
Large language models (LLMs) have achieved significant progress across various domains, but their increasing scale results in high computational and memory costs. Recent studies have revealed that LLMs exhibit sparsity, providing the potential to reduce model size through pruning techniques. However, existing pruning methods typically follow a prune-then-finetune paradigm. Since the pruned components still contain valuable information, their direct removal often leads to irreversible performance degradation, imposing a substantial computational burden to recover performance during finetuning. In this paper, we propose a novel paradigm that first applies regularization, then prunes, and finally finetunes. Based on this paradigm, we introduce DReSS, a simple and effective Data-driven Regularized Structured Streamlining method for LLMs. By leveraging a small amount of data to regularize the components to be pruned, DReSS explicitly transfers the important information to the remaining parts of the model in advance. Compared to direct pruning, this can reduce the information loss caused by parameter removal, thereby enhancing its language modeling capabilities. Experimental results demonstrate that DReSS significantly outperforms existing pruning methods even under extreme pruning ratios, significantly reducing latency and increasing throughput.
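The regularize-then-prune-then-finetune paradigm from the abstract can be sketched on a toy linear layer. This is a minimal NumPy illustration, not the paper's method: the dimensions, penalty strength `lam`, and the fixed choice of which rows to prune are all made up here, whereas DReSS selects and regularizes components in a data-driven way on real LLM modules.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear layer: y = x @ W, with d_in = 8 inputs and d_out = 4 outputs.
d_in, d_out, n = 8, 4, 256
W = rng.normal(size=(d_in, d_out))
X = rng.normal(size=(n, d_in))
Y = X @ W + 0.01 * rng.normal(size=(n, d_out))

# Hypothetical structured-pruning target: remove whole input rows of W.
prune_idx = np.array([1, 3, 5, 7])
keep_idx = np.array([0, 2, 4, 6])

W_hat = 0.1 * rng.normal(size=(d_in, d_out))
lam, lr = 0.5, 0.05

# Stage 1: regularization before pruning. An L2 penalty is applied ONLY to
# the rows slated for removal, so continued training on data shrinks them
# while the task gradient pushes their information into the surviving rows.
for _ in range(500):
    grad = X.T @ (X @ W_hat - Y) / n          # gradient of the task loss
    grad[prune_idx] += lam * W_hat[prune_idx]  # penalty on doomed rows only
    W_hat -= lr * grad

# Stage 2: structured pruning — the regularized rows are dropped entirely,
# which is hardware-friendly (the matrix simply gets smaller).
W_pruned = W_hat[keep_idx]
X_pruned = X[:, keep_idx]

# Stage 3 (sparsity-aware fine-tuning) would continue training W_pruned
# on data; it is omitted in this sketch.
reg_norm = float(np.linalg.norm(W_hat[prune_idx]))
loss = float(np.mean((X_pruned @ W_pruned - Y) ** 2))
print(f"norm of to-be-pruned rows after regularization: {reg_norm:.4f}")
print(f"reconstruction loss after structured pruning:   {loss:.4f}")
```

The point of the sketch is the ordering: because the penalized rows are driven toward zero *while the model is still training on data*, removing them afterwards deletes less information than pruning the original weights directly, which is the abstract's core claim.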
Problem

Research questions and friction points this paper is trying to address.

Large Language Models
Resource Efficiency
Performance Preservation
Innovation

Methods, ideas, or system contributions that make the work stand out.

DReSS
Ultra-Large Language Models
Pruning and Fine-tuning
Mingkuan Feng
Department of Automation, Tsinghua University, Beijing, China
Jinyang Wu
Department of Automation, Tsinghua University, Beijing, China
Shuai Zhang
Department of Automation, Tsinghua University, Beijing, China
Pengpeng Shao
Department of Automation, Tsinghua University, Beijing, China
Ruihan Jin
Department of Automation, Tsinghua University, Beijing, China
Zhengqi Wen
Tsinghua University
Jianhua Tao
Department of Automation, Tsinghua University, Beijing, China
Feihu Che
Unknown affiliation