Accelerating Large Language Models through Partially Linear Feed-Forward Network

📅 2025-01-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the excessive parameter counts of large language models (LLMs) and the severe accuracy degradation that existing compression methods suffer, this paper proposes TARDIS, a structural optimization method inspired by constant folding in compiler optimization. TARDIS replaces the nonlinear activation functions (e.g., GELU) in feed-forward network (FFN) layers with input-aware piecewise linear approximations, paired with an online outlier detector that dynamically falls back to the original computation to preserve accuracy on critical inputs. By folding the linearized FFN weights and integrating with the vLLM and Hugging Face (HF) inference stacks, TARDIS achieves 80% parameter reduction in FFN layers. On a 7B model it delivers end-to-end inference speedups of 1.6× (vLLM) and 1.4× (HF) with only a 10.9% accuracy drop, preserving up to 65% more accuracy than the state-of-the-art pruning methods Wanda and RIA at comparable compression ratios.

📝 Abstract
Large language models (LLMs) demonstrate remarkable capabilities but face deployment challenges due to their massive parameter counts. While existing compression techniques like pruning can reduce model size, they lead to significant accuracy degradation under high compression ratios. We present a novel perspective inspired by constant folding in compiler optimization. Our approach enables parameter reduction by treating activation functions in LLMs as linear functions. However, recent LLMs use complex non-linear activations like GELU that prevent direct application of this technique. We propose TARDIS, which enables optimization of LLMs with non-linear activations by partially approximating them with linear functions in frequently occurring input ranges. For outlier inputs, TARDIS employs an online predictor to dynamically fall back to original computations. Our experiments demonstrate that TARDIS achieves 80% parameter reduction in feed-forward networks, while significantly outperforming state-of-the-art pruning methods Wanda and RIA with up to 65% higher accuracy. In practical deployments for a 7B model, TARDIS achieves 1.6x end-to-end inference speedup when integrated with the vLLM serving system, and 1.4x speedup with the widely adopted HuggingFace implementation, while incurring only a 10.9% accuracy trade-off.
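The abstract's core mechanism can be sketched in a few lines: approximate GELU with per-segment least-squares lines inside a "frequent" input range, and fall back to the exact activation for outliers. This is only an illustrative sketch, not the paper's implementation; the breakpoints below are made-up constants, whereas TARDIS derives its ranges from observed activation statistics and uses an online predictor rather than a simple threshold check.

```python
import numpy as np

def gelu(x):
    # tanh-based GELU approximation, as used in many transformer FFNs
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x**3)))

# Hypothetical "frequent" input range and breakpoints (illustrative only);
# TARDIS derives these from activation statistics, not fixed constants.
BREAKS = np.array([-1.0, 0.5, 3.0])

# Least-squares linear fit (a*x + b) on each segment
coeffs = []
for lo, hi in zip(BREAKS[:-1], BREAKS[1:]):
    xs = np.linspace(lo, hi, 512)
    coeffs.append(np.polyfit(xs, gelu(xs), 1))

def partially_linear_gelu(x):
    """Piecewise-linear GELU inside [BREAKS[0], BREAKS[-1]);
    exact GELU fallback for outlier inputs."""
    x = np.asarray(x, dtype=float)
    out = gelu(x)  # fallback path: exact activation for outliers
    for (lo, hi), (a, b) in zip(zip(BREAKS[:-1], BREAKS[1:]), coeffs):
        seg = (x >= lo) & (x < hi)
        out = np.where(seg, a * x + b, out)
    return out
```

Inside the frequent range the linear segments track GELU closely, which is what makes the downstream weight folding possible; outliers take the exact (slower) path, trading a little compute for preserved accuracy.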
Problem

Research questions and friction points this paper is trying to address.

Parameter Reduction
Efficiency Improvement
Complex Activation Functions
Innovation

Methods, ideas, or system contributions that make the work stand out.

TARDIS
Linearization of Activation Functions
Parameter Reduction in LLMs
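The constant-folding idea behind the parameter reduction can be illustrated on a toy FFN: once the activation is (locally) linear, s(x) = a*x + b, the two FFN projection matrices collapse into a single smaller matrix plus a bias. This is a minimal sketch under that assumption; the dimensions and the scalar linear activation are hypothetical simplifications of the paper's piecewise scheme.

```python
import numpy as np

rng = np.random.default_rng(0)
d, h = 8, 32                       # model dim, hidden dim (toy sizes)
W1 = rng.standard_normal((h, d))   # up projection
W2 = rng.standard_normal((d, h))   # down projection
a, b = 0.7, 0.1                    # assumed linear activation: s(x) = a*x + b

def ffn_original(x):
    # Standard FFN with a linear activation in place of GELU
    return W2 @ (a * (W1 @ x) + b)

# Fold: W2 @ (a*(W1 @ x) + b) == (a * W2 @ W1) @ x + W2 @ (b * ones)
W_folded = a * (W2 @ W1)           # (d, d): replaces both matrices
bias_folded = W2 @ np.full(h, b)   # constant term folded into a bias

def ffn_folded(x):
    return W_folded @ x + bias_folded
```

Parameter count drops from 2*d*h to d*d + d; with the common h = 4d that folds 8d² weights into roughly d², which is the source of the large FFN reductions the paper reports (outlier inputs still need the original weights on the fallback path).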
Gansen Hu
Institute of Parallel and Distributed Systems, SEIEE, Shanghai Jiao Tong University
Zhaoguo Wang
Shanghai Jiao Tong University
Jinglin Wei
Institute of Parallel and Distributed Systems, SEIEE, Shanghai Jiao Tong University
Wei Huang
Haibo Chen
Institute of Parallel and Distributed Systems, SEIEE, Shanghai Jiao Tong University