🤖 AI Summary
To address the deployment challenges of large language models (LLMs), namely their excessive parameter counts and the severe accuracy degradation that existing compression methods suffer at high compression ratios, this paper proposes TARDIS, a structural optimization method inspired by constant folding in compiler optimization. TARDIS replaces the nonlinear activation functions (e.g., GELU) in feed-forward network (FFN) layers with input-aware piecewise linear approximations over frequently occurring input ranges, and uses an online predictor to detect outlier inputs and dynamically fall back to the original computation, preserving accuracy in critical regions. Combined with FFN parameter reparameterization and backend integration for vLLM and Hugging Face (HF), TARDIS achieves 80% parameter reduction in FFN layers. On a 7B model it delivers end-to-end inference speedups of 1.6× (vLLM) and 1.4× (HF) with only a 10.9% accuracy drop, outperforming the state-of-the-art pruning methods Wanda and RIA by up to 65% in accuracy under comparable compression.
📝 Abstract
Large language models (LLMs) demonstrate remarkable capabilities but face deployment challenges due to their massive parameter counts. While existing compression techniques like pruning can reduce model size, they lead to significant accuracy degradation under high compression ratios. We present a novel perspective inspired by constant folding in compiler optimization. Our approach enables parameter reduction by treating activation functions in LLMs as linear functions. However, recent LLMs use complex non-linear activations like GELU that prevent direct application of this technique. We propose TARDIS, which enables optimization of LLMs with non-linear activations by partially approximating them with linear functions in frequently occurring input ranges. For outlier inputs, TARDIS employs an online predictor to dynamically fall back to the original computations. Our experiments demonstrate that TARDIS achieves 80% parameter reduction in feed-forward networks, while significantly outperforming the state-of-the-art pruning methods Wanda and RIA with up to 65% higher accuracy. In practical deployments of a 7B model, TARDIS achieves a 1.6× end-to-end inference speedup when integrated with the vLLM serving system, and a 1.4× speedup with the widely adopted HuggingFace implementation, while incurring only a 10.9% accuracy trade-off.
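To make the core idea concrete, below is a minimal NumPy sketch of approximating a non-linear activation (the tanh-based GELU variant) with linear segments over a frequently occurring input range, falling back to the exact computation for outlier inputs. This is only an illustration of the principle; the range bounds, segment count, and the `piecewise_linear_gelu` helper are assumptions for the sketch, not TARDIS's actual kernels or its online predictor.

```python
import numpy as np

def gelu(x):
    # tanh-based approximation of GELU, common in LLM implementations
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x**3)))

def piecewise_linear_gelu(x, lo=-3.0, hi=3.0, pieces=16):
    """Approximate GELU with `pieces` linear segments on [lo, hi] (assumed to
    cover the frequently occurring inputs); fall back to the exact nonlinearity
    for outliers. `x` is expected to be a NumPy array."""
    knots = np.linspace(lo, hi, pieces + 1)
    y = gelu(knots)                        # precompute exact values at the knots
    out = np.interp(x, knots, y)           # linear interpolation inside the range
    outliers = (x < lo) | (x > hi)         # inputs outside the fitted range
    out[outliers] = gelu(x[outliers])      # dynamic fallback to original computation
    return out
```

Because each linear segment can be folded into the adjacent FFN weight matrices (the constant-folding analogy), only the fallback path needs the original parameters.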