🤖 AI Summary
Large language models (LLMs) face deployment challenges due to their massive parameter counts and high computational overhead. Conventional low-rank compression methods such as singular value decomposition (SVD) optimize only for matrix reconstruction error and neglect functional information loss, which leads to substantial performance degradation. To address this, we propose CALR, a layer-wise low-rank compression framework for LLMs. Its primary path uses SVD for efficient parameter reduction, while a parallel, learnable low-rank correction module treats the lost functional information as a trainable signal and recovers the functional residual through end-to-end optimization. CALR is architecture-agnostic: on several mainstream small-scale LLMs it achieves 26.93%–51.77% parameter compression while retaining 59.45%–90.42% of the original task performance, significantly outperforming baselines including LaCo, ShortGPT, and LoSparse.
📝 Abstract
Large Language Models (LLMs) present significant deployment challenges due to their immense size and computational requirements. Model compression techniques are essential for making these models practical for resource-constrained environments. A prominent compression strategy is low-rank factorization via Singular Value Decomposition (SVD), which reduces model parameters by approximating weight matrices. However, standard SVD focuses on minimizing matrix reconstruction error, often leading to a substantial loss of the model's functional performance. This degradation occurs because existing methods do not adequately correct for the functional information lost during compression. To address this gap, we introduce Corrective Adaptive Low-Rank Decomposition (CALR), a two-component compression approach. CALR combines a primary path of SVD-compressed layers with a parallel, learnable, low-rank corrective module that is explicitly trained to recover the functional residual error. Our experimental evaluation on SmolLM2-135M, Qwen3-0.6B, and Llama-3.2-1B demonstrates that CALR can reduce parameter counts by 26.93% to 51.77% while retaining 59.45% to 90.42% of the original model's performance, consistently outperforming LaCo, ShortGPT, and LoSparse. CALR's success shows that treating functional information loss as a learnable signal is a highly effective compression paradigm. This approach enables the creation of significantly smaller, more efficient LLMs, advancing their accessibility and practical deployment in real-world applications.
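To make the two-component structure concrete, here is a minimal PyTorch sketch of how a CALR-style layer could be organized: a frozen truncated-SVD primary path plus a small trainable low-rank correction path fit to the original layer's outputs. The class name `CALRLinear`, the chosen ranks, and the layer-wise fitting loop are illustrative assumptions based on the description above, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class CALRLinear(nn.Module):
    """Illustrative sketch (not the paper's code): a frozen truncated-SVD
    primary path plus a small trainable low-rank correction path that is
    fit to the original layer's functional residual."""

    def __init__(self, weight: torch.Tensor, rank_svd: int, rank_corr: int):
        super().__init__()
        out_features, in_features = weight.shape

        # Primary path: truncated SVD of the original weight, kept frozen.
        U, S, Vh = torch.linalg.svd(weight, full_matrices=False)
        self.A = nn.Parameter(U[:, :rank_svd] * S[:rank_svd], requires_grad=False)  # (out, r_svd)
        self.B = nn.Parameter(Vh[:rank_svd, :], requires_grad=False)                # (r_svd, in)

        # Corrective path: small learnable factors, initialized so the layer
        # starts out as plain truncated SVD and the correction is learned.
        self.C = nn.Parameter(torch.zeros(out_features, rank_corr))
        self.D = nn.Parameter(torch.randn(rank_corr, in_features) * 0.01)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # y = x (A B)^T + x (C D)^T  ~  x W^T
        return x @ self.B.T @ self.A.T + x @ self.D.T @ self.C.T


# Layer-wise correction fitting (sketch): train only the corrective factors so
# the compressed layer matches the original layer's outputs on calibration data.
original = nn.Linear(1024, 1024, bias=False)
compressed = CALRLinear(original.weight.data, rank_svd=64, rank_corr=16)
optimizer = torch.optim.AdamW([compressed.C, compressed.D], lr=1e-3)

for _ in range(100):
    x = torch.randn(32, 1024)  # placeholder for real calibration activations
    loss = nn.functional.mse_loss(compressed(x), original(x).detach())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In a sketch like this, the two low-rank paths together hold roughly (r_svd + r_corr) × (in + out) parameters instead of in × out, which is the source of the parameter savings whenever the combined ranks are small relative to the matrix dimensions; the specific ranks shown here are placeholder values, not those used in the paper.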