Is Quantization a Deal-breaker? Empirical Insights from Large Code Models

📅 2025-07-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study systematically investigates how model quantization affects the code-generation quality of large code models (e.g., CodeLlama, DeepSeekCoder), looking beyond conventional functional correctness to non-functional dimensions: reliability, maintainability, and security. The authors apply Activation-aware Weight Quantization (AWQ) to obtain 4-bit models and use static analysis tools to quantify developer-relevant metrics, including cyclomatic complexity, cognitive complexity, and lines of code. The empirical evaluation shows that 4-bit quantization substantially reduces memory footprint and computational overhead without statistically significant degradation in functional correctness or key code-quality attributes. These findings support quantization as an effective and practical technique for code generation and provide a foundation for deploying large code models in resource-constrained environments.
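
The quantization step described above can be reproduced in outline with the open-source AutoAWQ library. The snippet below is a minimal sketch, not the authors' exact pipeline: the checkpoint name, output path, and 4-bit configuration (group size 128, zero-point quantization) are illustrative assumptions.

```python
# Minimal sketch: 4-bit AWQ quantization of a code model with AutoAWQ.
# Assumes the "autoawq" and "transformers" packages are installed; the
# checkpoint and output paths below are illustrative, not the paper's setup.
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_path = "codellama/CodeLlama-7b-hf"   # hypothetical base checkpoint
quant_path = "codellama-7b-awq-4bit"       # where the quantized model is saved

# Common AWQ settings: 4-bit weights, group size 128, zero-point quantization.
quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}

model = AutoAWQForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

# Calibrate activations and quantize the weights to 4 bits.
model.quantize(tokenizer, quant_config=quant_config)

# Persist the quantized model and tokenizer for later code-generation runs.
model.save_quantized(quant_path)
tokenizer.save_pretrained(quant_path)
```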

📝 Abstract
The growing scale of large language models (LLMs) not only demands extensive computational resources but also raises environmental concerns due to their increasing carbon footprint. Model quantization emerges as an effective approach that can reduce the resource demands of LLMs by decreasing parameter precision without substantially affecting performance (e.g., 16 bit to 4 bit). While recent studies have established quantization as a promising approach for optimizing large code models (LCMs), a specialized subset of LLMs tailored for automated software engineering, their findings offer only limited insights into its practical implications. Specifically, current investigations focus only on the functional correctness of the code generated by quantized models, neglecting how quantization impacts critical aspects of code quality such as reliability, maintainability, and security. To bridge this gap, our study investigates the effects of quantization on the qualitative aspects of automatically generated code. We apply Activation-aware Weight Quantization (AWQ) to two widely used code models, CodeLlama and DeepSeekCoder, to generate Java and Python code. Using state-of-the-art static analysis tools, we evaluate software quality metrics and static features including cyclomatic complexity, cognitive complexity, and lines of code. Our findings reveal that quantization is a robust technique that not only preserves functional correctness, but also retains key qualitative code attributes sought after by developers, such as maintainability and structural simplicity.
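
For the Python outputs, code-quality measurements such as cyclomatic complexity and lines of code can be computed with a static analysis library like radon. The sketch below is an illustrative stand-in and assumes radon rather than the authors' specific toolchain; cognitive complexity typically requires a separate tool such as SonarQube and is omitted here.

```python
# Illustrative sketch: computing cyclomatic complexity and size metrics for a
# generated Python snippet with radon (assumed stand-in for the paper's tools).
from radon.complexity import cc_visit
from radon.raw import analyze

generated_code = """
def fizzbuzz(n):
    out = []
    for i in range(1, n + 1):
        if i % 15 == 0:
            out.append("FizzBuzz")
        elif i % 3 == 0:
            out.append("Fizz")
        elif i % 5 == 0:
            out.append("Buzz")
        else:
            out.append(str(i))
    return out
"""

# Cyclomatic complexity reported per function/method block.
for block in cc_visit(generated_code):
    print(f"{block.name}: cyclomatic complexity = {block.complexity}")

# Raw size metrics: total, logical, and source lines of code.
raw = analyze(generated_code)
print(f"LOC = {raw.loc}, LLOC = {raw.lloc}, SLOC = {raw.sloc}")
```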
Problem

Research questions and friction points this paper is trying to address.

Investigates quantization's impact on code quality metrics
Evaluates reliability, maintainability, and security of quantized LCMs
Assesses AWQ effects on CodeLlama and DeepSeekCoder outputs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses Activation-aware Weight Quantization (AWQ)
Evaluates code quality metrics post-quantization
Focuses on maintainability and structural simplicity
🔎 Similar Papers
No similar papers found.
Saima Afrin
Department of Computer Science, William & Mary
Bowen Xu
Department of Computer Science, North Carolina State University
Antonio Mastropaolo
William & Mary
Software engineering · software testing · deep learning