Semantic Retention and Extreme Compression in LLMs: Can We Have Both?

📅 2025-05-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the degradation of semantic fidelity in large language models (LLMs) under extreme compression, this paper proposes a semantics-preserving compression framework that jointly applies pruning and quantization. Methodologically, it combines structured pruning, INT4/INT2 quantization, semantic-sensitive layer adaptation, and knowledge-distillation-based fine-tuning. A key contribution is the Semantic Retention Compression Rate (SrCr), a novel metric that formally models the trade-off between compression rate and semantic fidelity, enabling principled joint configuration optimization. Experiments show that, at equal theoretical compression rates, the proposed joint scheme improves average downstream-task performance by 20% over a quantization-only baseline while retaining over 98% of the original semantic consistency.

📝 Abstract
The exponential growth in Large Language Model (LLM) deployment has intensified the need for efficient model compression techniques to reduce computational and memory costs. While pruning and quantization have shown promise, their combined potential remains largely unexplored. In this paper, we examine joint compression and how strategically combining pruning and quantization could yield superior performance-to-compression ratios compared to single-method approaches. Recognizing the challenges in accurately assessing LLM performance, we address key limitations of previous evaluation frameworks and introduce the Semantic Retention Compression Rate (SrCr), a novel metric that quantifies the trade-off between model compression and semantic preservation, facilitating the optimization of pruning-quantization configurations. Experiments demonstrate that our recommended combination achieves, on average, a 20% performance increase compared to an equivalent quantization-only model at the same theoretical compression rate.
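The joint compression the abstract describes can be illustrated with a toy sketch, assuming magnitude-based structured pruning (dropping whole output rows) followed by symmetric per-tensor INT4 quantization. The paper's actual layer selection, semantic-sensitive adaptation, and distillation steps are not specified here, so this is only a minimal illustration of the two operations being composed:

```python
import numpy as np

def structured_prune(weights: np.ndarray, keep_ratio: float) -> np.ndarray:
    """Zero out entire output rows with the smallest L2 norm (structured pruning)."""
    norms = np.linalg.norm(weights, axis=1)
    k = int(round(keep_ratio * weights.shape[0]))
    keep = np.argsort(norms)[-k:]              # indices of the k largest-norm rows
    mask = np.zeros(weights.shape[0], dtype=bool)
    mask[keep] = True
    return weights * mask[:, None]

def quantize_int4(weights: np.ndarray) -> tuple[np.ndarray, float]:
    """Symmetric per-tensor INT4 quantization: values map to integers in [-7, 7]."""
    scale = max(float(np.abs(weights).max()) / 7.0, 1e-8)
    q = np.clip(np.round(weights / scale), -7, 7).astype(np.int8)
    return q, scale

rng = np.random.default_rng(0)
w = rng.normal(size=(8, 16)).astype(np.float32)

pruned = structured_prune(w, keep_ratio=0.5)   # half the rows removed
q, scale = quantize_int4(pruned)               # 4-bit integers + one scale
dequant = q.astype(np.float32) * scale         # approximate reconstruction
```

Composing the two steps is what gives the joint scheme its compression rate: pruning removes parameters outright, and quantization shrinks the bit-width of those that remain.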
Problem

Research questions and friction points this paper is trying to address.

Exploring joint compression via pruning and quantization in LLMs
Introducing Semantic Retention Compression Rate (SrCr) metric
Optimizing performance-to-compression ratios in model compression
Innovation

Methods, ideas, or system contributions that make the work stand out.

Combines pruning and quantization for better compression
Introduces Semantic Retention Compression Rate (SrCr) metric
Achieves an average 20% performance increase over quantization-only at the same theoretical compression rate
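The listing does not give the formula for SrCr, so the following is only a hypothetical stand-in that treats the metric as semantic retention weighted by compression rate, which captures the trade-off the abstract describes (configurations that compress more but retain less semantics can be compared on one scale):

```python
def srcr(retention: float, compression_rate: float) -> float:
    """Illustrative stand-in for SrCr (the paper's exact definition is not
    given here): semantic retention weighted by the compression rate, so a
    higher score means more semantics preserved per unit of compression."""
    assert 0.0 <= retention <= 1.0, "retention is a fraction of original semantics"
    assert compression_rate >= 1.0, "compression rate relative to the dense model"
    return retention * compression_rate

# Compare two hypothetical configurations at the same theoretical compression rate.
joint = srcr(retention=0.98, compression_rate=8.0)       # pruning + quantization
quant_only = srcr(retention=0.80, compression_rate=8.0)  # quantization only
```

Under this illustrative form, the joint configuration scores higher because it retains more semantics at an equal compression rate, matching the comparison reported in the abstract.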