🤖 AI Summary
The impact of quantization on multilingual machine translation—particularly for low-resource languages—remains poorly understood. Method: We systematically evaluate four post-training quantization methods—AWQ, BitsAndBytes, GGUF, and AutoRound—across 55 languages at 4-bit and 2-bit precision. Contribution/Results: Our study is the first to reveal that quantization error grows as language resource availability decreases and varies across language families: 2-bit quantization severely degrades translation quality for low-resource languages, whereas 4-bit quantization preserves performance for high-resource languages. GGUF demonstrates superior robustness under 2-bit quantization. Furthermore, we validate that language-matched calibration effectively mitigates low-bit degradation. These findings provide empirical evidence and practical guidance for lightweight deployment of multilingual LLMs in resource-constrained settings.
📝 Abstract
Quantization is essential for deploying large language models (LLMs) on resource-constrained hardware, but its implications for multilingual tasks remain underexplored. We conduct the first large-scale evaluation of post-training quantization (PTQ) on machine translation across 55 languages using five LLMs ranging from 1.7B to 70B parameters. Our analysis reveals that while 4-bit quantization often preserves translation quality for high-resource languages and large models, significant degradation occurs for low-resource and typologically diverse languages, particularly in 2-bit settings. We compare four quantization techniques (AWQ, BitsAndBytes, GGUF, and AutoRound), showing that algorithm choice and model size jointly determine robustness. GGUF variants provide the most consistent performance, even at 2-bit precision. Additionally, we quantify the interactions between quantization, decoding hyperparameters, and calibration languages, finding that language-matched calibration offers benefits primarily in low-bit scenarios. Our findings offer actionable insights for deploying multilingual LLMs for machine translation under quantization constraints, especially in low-resource settings.
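To build intuition for why 2-bit precision degrades quality so much more sharply than 4-bit, here is a minimal sketch (not from the paper, and much simpler than the AWQ/GGUF/AutoRound schemes it evaluates) of symmetric absmax quantization, showing how per-weight reconstruction error grows as the bit width shrinks:

```python
def quantize(weights, bits):
    """Symmetric absmax quantization: map floats to signed integers
    in [-(2**(bits-1) - 1), 2**(bits-1) - 1] with a single scale factor."""
    qmax = 2 ** (bits - 1) - 1
    scale = max(abs(w) for w in weights) / qmax
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Reconstruct approximate float weights from quantized integers."""
    return [v * scale for v in q]

# Toy weight vector (illustrative values, not real model weights).
weights = [0.12, -0.53, 0.97, -0.08, 0.41]

errors = {}
for bits in (4, 2):
    q, scale = quantize(weights, bits)
    recon = dequantize(q, scale)
    # Worst-case absolute reconstruction error at this bit width.
    errors[bits] = max(abs(w - r) for w, r in zip(weights, recon))

print(errors)  # 2-bit error is roughly an order of magnitude larger than 4-bit
```

At 2 bits there are only three representable levels, so most weights collapse to zero or the extremes; production schemes like GGUF's k-quants mitigate this with per-block scales, which is consistent with the paper's finding that GGUF is the most robust at 2-bit precision.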