ELDeR: Getting Efficient LLMs through Data-Driven Regularized Layer-wise Pruning

📅 2025-05-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
Conventional pruning of large language models (LLMs) incurs irreversible information loss and requires costly recovery fine-tuning (RFT) to restore performance. Method: this paper proposes a "regularization-first" pruning paradigm: first, apply L1/L2 regularization to the input-output difference of low-importance Transformer layers, encouraging information to migrate into the high-importance layers; then perform data-driven, importance-aware structured layer pruning. Contribution/Results: unlike static pruning, this approach avoids catastrophic performance degradation and preserves language modeling capability without RFT. Evaluated on multiple benchmarks, it significantly outperforms state-of-the-art layer-pruning methods, delivering notable inference speedup while drastically reducing fine-tuning overhead.

📝 Abstract
The deployment of large language models (LLMs) in many fields is largely hindered by their high computational and memory costs. Recent studies suggest that LLMs exhibit sparsity, which can be exploited for pruning. Previous pruning methods typically follow a prune-then-finetune paradigm. Since the pruned parts still contain valuable information, statically removing them without updating the remaining parameters often results in irreversible performance degradation, requiring costly recovery fine-tuning (RFT) to maintain performance. To address this, we propose a novel paradigm: first apply regularization, then prune. Based on this paradigm, we propose ELDeR: Getting Efficient LLMs through Data-Driven Regularized Layer-wise Pruning. We multiply the output of each transformer layer by an initial weight, then iteratively learn the per-layer weights from a small amount of data in a simple way. After that, we apply regularization to the difference between the output and input of the layers with smaller weights, forcing their information to transfer to the remaining layers. Compared with direct pruning, ELDeR reduces the information loss caused by direct parameter removal, thus better preserving the model's language modeling ability. Experimental results show that ELDeR achieves superior performance compared with powerful layer-wise structured pruning methods, while greatly reducing RFT computational costs. Since ELDeR is a layer-wise pruning method, its end-to-end acceleration effect is obvious, making it a promising technique for efficient LLMs.
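The regularize-then-prune recipe the abstract describes can be sketched in a toy form: scale each layer's contribution by a learnable scalar weight, penalize the output-input difference of the lowest-weight layers so their information migrates elsewhere, then drop those layers. This is a minimal NumPy sketch under stated assumptions, not the paper's implementation: the linear `tanh` layer stands in for a real transformer block, and names like `keep_ratio` are illustrative.

```python
import numpy as np

def layer_forward(x, W):
    # Toy stand-in for a transformer layer (assumption, not the real block).
    return np.tanh(x @ W)

def forward_with_weights(x, layers, alphas):
    """Run all layers, scaling each layer's residual contribution by its weight."""
    per_layer = []
    for W, a in zip(layers, alphas):
        y = x + a * layer_forward(x, W)   # weighted layer output
        per_layer.append((x, y))          # keep (input, output) for the penalty
        x = y
    return x, per_layer

def regularization_loss(per_layer, alphas, keep_ratio=0.5):
    """L2 penalty on (output - input) of the lowest-weight layers."""
    k = int(len(alphas) * keep_ratio)           # layers that will survive pruning
    low = np.argsort(alphas)[:len(alphas) - k]  # indices of low-importance layers
    return sum(np.sum((y - x) ** 2) for i in low for x, y in [per_layer[i]])

def prune(layers, alphas, keep_ratio=0.5):
    """After regularization, keep only the highest-weight layers."""
    k = int(len(alphas) * keep_ratio)
    keep = sorted(np.argsort(alphas)[::-1][:k])
    return [layers[i] for i in keep]

rng = np.random.default_rng(0)
layers = [rng.standard_normal((4, 4)) * 0.1 for _ in range(4)]
alphas = np.array([0.9, 0.1, 0.8, 0.2])   # learned per-layer importance weights
x = rng.standard_normal((2, 4))

_, per_layer = forward_with_weights(x, layers, alphas)
loss = regularization_loss(per_layer, alphas)  # added to the training objective
pruned = prune(layers, alphas)                 # 2 of 4 layers remain
```

In the paper's actual pipeline the weights are learned iteratively from a small amount of data and the penalty is applied during training, so the low-weight layers become near-identity maps before they are removed; the sketch only shows where each quantity comes from.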
Problem

Research questions and friction points this paper is trying to address.

Reduce computational and memory costs of LLMs
Minimize performance degradation from pruning
Achieve efficient pruning with less recovery fine-tuning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Data-driven regularized layer-wise pruning
Iterative weight learning for transformer layers
Reduces information loss and RFT costs
Mingkuan Feng
Tsinghua University
Jinyang Wu
Tsinghua University
Siyuan Liu
Peking University
Shuai Zhang
Tsinghua University
Hongjian Fang
Tsinghua University
Ruihan Jin
Tsinghua University
Feihu Che
Unknown affiliation
Pengpeng Shao
Tsinghua University
Zhengqi Wen
Tsinghua University
Jianhua Tao
Tsinghua University