OATS: Outlier-Aware Pruning Through Sparse and Low Rank Decomposition

📅 2024-09-20
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
🤖 AI Summary
To address the high retraining cost and severe accuracy degradation at high compression ratios in large-model compression, this paper proposes a retraining-free, outlier-aware pruning method. The approach is the first to leverage second-order statistics of the input embeddings to guide weight decomposition, explicitly factorizing each weight matrix into a sparse component that captures outliers and a low-rank component that encodes dominant patterns, which can then be compressed efficiently. Unlike conventional pruning methods, its accuracy degrades far more gracefully as compression increases. Evaluated on Llama-3, Phi-3, ViT, and DINOv2, it compresses models by up to 60% while significantly outperforming state-of-the-art methods in accuracy, and delivers up to a 1.37× speedup in CPU inference latency.

📝 Abstract
The recent paradigm shift to large-scale foundation models has brought about a new era for deep learning that, while successful in practice, has been plagued by prohibitively expensive costs in terms of high memory consumption and compute. To mitigate these issues, there has been a concerted effort in post-hoc neural network pruning techniques that do not require costly retraining. Despite the considerable progress being made, existing methods often exhibit a steady drop in model performance as the compression increases. In this paper, we present a novel approach to compressing large transformers, coined OATS, that utilizes the second moment information in the input embeddings to decompose the model weights into a sum of sparse and low-rank matrices. Without any retraining, OATS achieves state-of-the-art performance when compressing models by up to 60% on large language models such as Llama-3 and Phi-3 and vision transformers such as ViT and DINOv2, while delivering up to 1.37× the CPU acceleration versus a comparably pruned model.
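The decomposition described in the abstract can be illustrated with a minimal sketch: scale each weight matrix by a second-moment statistic of its input activations, then alternately fit a truncated-SVD low-rank term and a top-magnitude sparse term to the scaled weights. This is an assumption-laden reconstruction for intuition only (the function name, iteration scheme, and scaling details are illustrative, not the paper's exact algorithm).

```python
import numpy as np

def sparse_plus_low_rank(W, X, rank, sparsity, iters=10):
    """Sketch of an OATS-style decomposition: W ~= S + L.

    W: (out_dim, in_dim) weight matrix.
    X: (n_samples, in_dim) calibration inputs for the layer.
    rank: rank of the low-rank component L.
    sparsity: fraction of entries kept in the sparse component S.
    """
    # Per-input-feature second-moment (RMS) scaling, so that weights
    # feeding large activations (outliers) are prioritized.
    d = np.sqrt((X ** 2).mean(axis=0)) + 1e-8
    Ws = W * d  # scale columns by input second moment
    S = np.zeros_like(Ws)
    k = max(1, int(sparsity * Ws.size))
    for _ in range(iters):
        # Low-rank step: truncated SVD of the residual.
        U, s, Vt = np.linalg.svd(Ws - S, full_matrices=False)
        L = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        # Sparse step: keep the k largest-magnitude residual entries.
        R = Ws - L
        thresh = np.partition(np.abs(R).ravel(), -k)[-k]
        S = np.where(np.abs(R) >= thresh, R, 0.0)
    # Undo the scaling so that W ~= S + L in the original parameterization.
    return S / d, L / d
```

At inference time, the dense matmul `x @ W.T` is replaced by `x @ S.T + (x @ V) @ U.T` with the low-rank factors kept separate, which is where the parameter and latency savings come from.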
Problem

Research questions and friction points this paper is trying to address.

High memory and compute costs of large-scale foundation models
Steady accuracy drop in existing pruning methods as compression increases
Limited CPU speedups delivered by comparably pruned models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses second-moment statistics of input embeddings to guide decomposition
Decomposes weights into sparse and low-rank matrices
Achieves state-of-the-art compression without retraining
Stephen Zhang
University of Toronto
V. Papyan
University of Toronto