🤖 AI Summary
This work demonstrates that large language models exhibit substantial parameter redundancy, with performance depending more critically on the relative rank of weights than on their absolute values. The authors propose an efficient, retraining-free compression method that quantizes each weight matrix into 16–64 shared cluster centroids via K-means clustering, followed by fine-tuning of the cluster centers and an affine correction (w′ = aw + b) to recover performance. To theoretically ground this approach, they introduce a rank-preserving perturbation analysis framework, showing that perturbations disrupting weight order cause severe performance degradation, whereas rank-preserving perturbations have minimal impact. Evaluated on Llama-3.1-8B-Instruct and SmolLM2-135M, the method achieves high compression rates while preserving accuracy, with cluster-center fine-tuning recovering 30–40% of the initial performance loss.
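The rank-preserving idea in the summary can be illustrated with a small sketch (our own illustrative code, not the paper's): a positive-slope affine map w′ = aw + b leaves the ordering of all values unchanged, while reordering cluster centroids scrambles ranks even though the mean and variance of the centroid set are exactly preserved.

```python
# Illustrative sketch of "rank-preserving" vs. "rank-scrambling" perturbations
# of cluster centroids; all names here are our own, not from the paper.
import numpy as np

# Sorted cluster centers, standing in for the K shared values of one matrix.
centroids = np.sort(np.random.default_rng(0).normal(size=16))

# Positive-slope affine map: strictly increasing, so weight order is unchanged.
affine = 1.05 * centroids + 0.01

# Swap the smallest and largest centroid: the multiset of values (hence mean
# and variance) is identical, but the relative ranks are now scrambled.
scrambled = centroids.copy()
scrambled[0], scrambled[-1] = scrambled[-1], scrambled[0]

rank_preserved = np.array_equal(np.argsort(affine), np.argsort(centroids))
rank_scrambled = not np.array_equal(np.argsort(scrambled), np.argsort(centroids))
```

Per the paper's findings, perturbations of the first kind should leave model quality largely intact, while perturbations of the second kind should degrade it sharply.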
📝 Abstract
Large language models (LLMs) contain billions of parameters, yet many exact values are not essential. We show that what matters most is the relative rank of weights (whether one connection is stronger or weaker than another) rather than their precise magnitudes. To reduce the number of unique weight values, we apply weight clustering to pretrained models, replacing every weight matrix with K shared values from K-means. For Llama-3.1-8B-Instruct and SmolLM2-135M, reducing each matrix to only 16–64 distinct values preserves strong accuracy without retraining, providing a simple, training-free method to compress LLMs on disk. Optionally fine-tuning only the cluster means (centroids) recovers 30–40% of the remaining accuracy gap at minimal cost. We then systematically randomize cluster means while keeping assignments fixed. Scrambling the relative ranks of the clusters degrades quality sharply (perplexity can increase by orders of magnitude) even when global statistics such as mean and variance are preserved. In contrast, rank-preserving randomizations cause almost no loss at mid and late layers. When many layers are perturbed simultaneously, however, progressive layer-by-layer replacement reveals that scale drift, not rank distortion, is the dominant collapse mechanism; an affine correction w' = aw + b with a > 0 (which preserves both rank order and the overall weight distribution) can substantially delay this drift. This rank-based perspective offers a new lens on model compression and robustness.
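The core compression step described above, replacing every entry of a weight matrix with one of K shared values via K-means, can be sketched as a minimal 1-D clustering routine. This is our own illustration under stated assumptions (NumPy weights, quantile initialization, Lloyd iterations), not the paper's implementation.

```python
# Minimal sketch of per-matrix weight clustering via 1-D K-means.
# Assumption: W is a NumPy weight matrix; the paper applies this idea
# to every weight matrix of the LLM.
import numpy as np

def cluster_weights(W, k=16, iters=25):
    """Replace every entry of W with one of k shared values (Lloyd's K-means)."""
    flat = W.ravel()
    # Initialize centroids at evenly spaced quantiles of the weight distribution.
    centroids = np.quantile(flat, np.linspace(0.0, 1.0, k))
    for _ in range(iters):
        # Assign each weight to its nearest centroid.
        assign = np.abs(flat[:, None] - centroids[None, :]).argmin(axis=1)
        # Move each centroid to the mean of its assigned weights.
        for j in range(k):
            mask = assign == j
            if mask.any():
                centroids[j] = flat[mask].mean()
    # Quantized matrix: only k distinct values, plus per-weight cluster indices.
    return centroids[assign].reshape(W.shape), assign, centroids

W = np.random.default_rng(1).normal(size=(64, 64))
W_q, assign, centroids = cluster_weights(W, k=16)
```

Storing `W_q` then requires only the k float centroids plus a small integer index per weight (4 bits for k=16), which is where the on-disk compression comes from; the optional centroid fine-tuning updates only `centroids` while keeping `assign` fixed.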