🤖 AI Summary
The impact of quantization on internal representations of large language models (LLMs) remains poorly understood, hindering their trustworthy deployment in resource-constrained settings.
Method: We systematically investigate the effects of 4-bit and 8-bit quantization on neuron activations, contribution distributions, calibration performance, and redundancy across multiple LLM families (e.g., Llama, Qwen), employing neuron significance analysis, dead neuron detection, and attribution-based interpretability methods.
Contribution/Results: Quantization induces no significant performance degradation or calibration shift; the proportion of dead neurons remains stable, and quantized LLMs retain higher neuron significance than smaller models. While sensitivity to quantization varies across architectures, overall robustness is strong. This work provides the first empirical characterization of representational robustness under quantization, revealing mechanistic insights into how LLMs preserve functional integrity post-compression. Our findings establish theoretical foundations and practical guidelines for reliable lightweight LLM compression and deployment.
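The calibration analysis above compares how well model confidence tracks accuracy before and after quantization. The paper's exact metric is not stated here, so as an illustrative assumption, the standard Expected Calibration Error (ECE) over equal-width confidence bins can be sketched as:

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Expected Calibration Error: the bin-weighted average gap between
    mean confidence and accuracy over equal-width confidence bins.
    Note: function name and binning scheme are illustrative, not from the paper."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    bin_edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap  # weight gap by fraction of examples in bin
    return ece
```

Comparing the ECE of a full-precision model against its 4-bit and 8-bit variants on the same evaluation set is one way to quantify the "calibration shift" the summary reports as insignificant.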
📝 Abstract
Quantization offers a practical solution for deploying LLMs in resource-constrained environments. However, its impact on internal representations remains understudied, raising questions about the reliability of quantized models. In this study, we employ a range of interpretability techniques to investigate how quantization affects model and neuron behavior. We analyze multiple LLMs under 4-bit and 8-bit quantization. Our findings reveal that the impact of quantization on model calibration is generally minor. Analysis of neuron activations indicates that the number of dead neurons, i.e., those with activation values close to 0 across the dataset, remains consistent regardless of quantization. In terms of neuron contribution to predictions, we observe that smaller full-precision models exhibit fewer salient neurons, whereas larger models tend to have more, with the exception of Llama-2-7B. The effect of quantization on neuron redundancy varies across models. Overall, our findings suggest that the effect of quantization may vary by model and task; however, we did not observe any drastic changes that would discourage the use of quantization as a reliable model compression technique.
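The abstract defines dead neurons as those whose activations stay close to 0 across the dataset. A minimal sketch of such a check (the threshold, function name, and use of the maximum absolute activation are illustrative assumptions, not details from the paper) might look like:

```python
import numpy as np

def dead_neuron_fraction(activations, threshold=1e-6):
    """Fraction of neurons that never activate meaningfully.

    activations: array of shape (num_examples, num_neurons),
    e.g. hidden states collected over an evaluation dataset.
    A neuron is counted as dead if its maximum absolute activation
    across all examples stays below `threshold` (illustrative value).
    """
    max_abs = np.abs(activations).max(axis=0)
    return float((max_abs < threshold).mean())
```

Running this on activations collected from a full-precision model and from its 4-bit/8-bit quantized variants would let one compare the dead-neuron proportions the abstract reports as stable.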