CALR: Corrective Adaptive Low-Rank Decomposition for Efficient Large Language Model Layer Compression

📅 2025-08-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) face deployment challenges due to their massive parameter counts and high computational overhead. Conventional low-rank compression methods—e.g., singular value decomposition (SVD)—optimize only for matrix reconstruction error, neglecting functional information loss, which leads to substantial performance degradation. To address this, we propose CALR, a layer-wise low-rank compression framework tailored for LLMs: its primary path employs SVD for efficient parameter reduction, while a novel, learnable, parallel low-rank correction module explicitly models functional information loss as a trainable signal and recovers functional residuals via end-to-end optimization. CALR is architecture-agnostic and achieves 26.93%–51.77% parameter compression on multiple mainstream small-scale LLMs, retaining 59.45%–90.42% of original task performance—significantly outperforming baselines including LaCo, ShortGPT, and LoSparse.

📝 Abstract
Large Language Models (LLMs) present significant deployment challenges due to their immense size and computational requirements. Model compression techniques are essential for making these models practical for resource-constrained environments. A prominent compression strategy is low-rank factorization via Singular Value Decomposition (SVD), which reduces model parameters by approximating weight matrices. However, standard SVD focuses on minimizing matrix reconstruction error, often leading to a substantial loss of the model's functional performance. This degradation occurs because existing methods do not adequately correct for the functional information lost during compression. To address this gap, we introduce Corrective Adaptive Low-Rank Decomposition (CALR), a two-component compression approach. CALR combines a primary path of SVD-compressed layers with a parallel, learnable, low-rank corrective module that is explicitly trained to recover the functional residual error. Our experimental evaluation on SmolLM2-135M, Qwen3-0.6B, and Llama-3.2-1B demonstrates that CALR can reduce parameter counts by 26.93% to 51.77% while retaining 59.45% to 90.42% of the original model's performance, consistently outperforming LaCo, ShortGPT, and LoSparse. CALR's success shows that treating functional information loss as a learnable signal is a highly effective compression paradigm. This approach enables the creation of significantly smaller, more efficient LLMs, advancing their accessibility and practical deployment in real-world applications.
Problem

Research questions and friction points this paper is trying to address.

Minimizes functional performance loss in LLM compression
Corrects information loss from standard SVD decomposition
Enables efficient deployment of large language models
Innovation

Methods, ideas, or system contributions that make the work stand out.

SVD-compressed layers with corrective module
Learnable low-rank module for error recovery
Treating functional loss as learnable signal
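The two-path design above can be sketched numerically. The snippet below is a minimal NumPy illustration, not the paper's implementation: it truncates a weight matrix with SVD (the primary path) and then adds a rank-c correction fitted to the functional residual on calibration activations. The paper trains its corrective module end-to-end with gradient descent; here, as a closed-form proxy, the correction projects the residual outputs onto their top-c left singular subspace. All dimensions and variable names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, n = 64, 48, 256
rank_main, rank_corr = 8, 4        # primary SVD rank, correction rank

W = rng.standard_normal((d_out, d_in))   # stand-in for a layer weight
X = rng.standard_normal((d_in, n))       # stand-in calibration activations

# Primary path: rank-r SVD truncation of W, stored as two thin factors.
U, S, Vt = np.linalg.svd(W, full_matrices=False)
A = U[:, :rank_main] * S[:rank_main]     # (d_out, r)
B = Vt[:rank_main]                       # (r, d_in)

# Corrective module: rank-c factors aimed at the *functional* residual
# (what the compressed layer gets wrong on real activations), rather
# than the weight-space reconstruction error alone.
R = W - A @ B                            # weight-space residual
Y_res = R @ X                            # functional residual on data
Uc = np.linalg.svd(Y_res, full_matrices=False)[0][:, :rank_corr]
C1, C2 = Uc, Uc.T @ R                    # correction C1 @ C2 ≈ Uc Uc^T R

Y = W @ X
err_svd = np.linalg.norm(Y - A @ (B @ X))
err_calr = np.linalg.norm(Y - (A @ (B @ X) + C1 @ (C2 @ X)))
params = A.size + B.size + C1.size + C2.size   # vs. W.size originally
print(f"SVD-only error {err_svd:.2f} -> with correction {err_calr:.2f}; "
      f"params {params} vs {W.size}")
```

Even this crude proxy shows the point of the design: the corrective path reduces output error on the calibration data while the combined factor count stays well below the dense weight's parameter count.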
Muchammad Daniyal Kautsar
Department of Electrical and Information Engineering, Universitas Gadjah Mada, Yogyakarta 55281, Indonesia
Afra Majida Hariono
Department of Electrical and Information Engineering, Universitas Gadjah Mada, Yogyakarta 55281, Indonesia
Widyawan
Department of Electrical and Information Engineering, Universitas Gadjah Mada, Yogyakarta 55281, Indonesia
Syukron Abu Ishaq Alfarozi
Universitas Gadjah Mada
intelligence system, machine learning, computer vision, natural language processing
Kuntpong Wararatpanya
School of Information Technology, King Mongkut’s Institute of Technology Ladkrabang, Bangkok 10520, Thailand