AI Summary
To address the trade-off between computational efficiency and representational capacity in post-training quantization (PTQ) of large language models (LLMs) at ultra-low bit-widths, this paper introduces PTQTP, the first PTQ framework supporting structured ternary ({-1, 0, 1}) weight quantization. Its core innovation is a trit-plane decomposition that ensures global weight consistency, model-agnostic deployment, and purely additive (multiplication-free) inference using only 2×1.58 bits per weight. PTQTP requires no mixed-precision schemes, compensation modules, or retraining, so a full model can be quantized within one hour. Leveraging a theory-driven progressive approximation algorithm and a unified ternary arithmetic design, PTQTP achieves an 82.4% retention rate on mathematical reasoning tasks across the LLaMA3.x and Qwen3 model families (0.6B-70B parameters), matching or surpassing the performance of 1.58-bit quantization-aware training.
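To make the trit-plane idea concrete, here is a minimal sketch of a greedy progressive approximation that decomposes a weight matrix into two scaled ternary planes, W ≈ a₁T₁ + a₂T₂ with Tₖ ∈ {-1, 0, 1}. The thresholding heuristic and per-plane scale fitting below are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

def fit_ternary_plane(residual):
    """Fit one scaled ternary plane a*T (T in {-1, 0, 1}) to the residual.

    Heuristic: zero out entries below a fraction of the mean magnitude
    (the 0.7 factor is a hypothetical choice); the scale a is then the
    least-squares optimum for the chosen support.
    """
    thresh = 0.7 * np.mean(np.abs(residual))
    T = np.where(residual > thresh, 1, np.where(residual < -thresh, -1, 0))
    support = T != 0
    a = np.abs(residual[support]).mean() if support.any() else 0.0
    return a, T

def trit_plane_decompose(W, num_planes=2):
    """Greedy progressive approximation: W ~ sum_k a_k * T_k."""
    planes, residual = [], W.astype(np.float64).copy()
    for _ in range(num_planes):
        a, T = fit_ternary_plane(residual)
        planes.append((a, T))
        residual -= a * T  # each plane fits what the previous ones missed
    return planes

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 64))
planes = trit_plane_decompose(W)
W_hat = sum(a * T for a, T in planes)
err = np.linalg.norm(W - W_hat) / np.linalg.norm(W)
print(f"relative error with 2 trit-planes: {err:.3f}")
```

Two ternary planes give each weight 3 × 3 = 9 possible scaled values (hence the 2×1.58-bit storage, since log₂ 3 ≈ 1.58), which is where the extra expressiveness over a single binary or ternary plane comes from.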
Abstract
Post-training quantization (PTQ) of large language models (LLMs) to extremely low bit-widths remains challenging due to the fundamental trade-off between computational efficiency and model expressiveness. Existing ultra-low-bit PTQ methods rely on binary approximations or complex compensation mechanisms, and consequently suffer from either limited representational capacity or computational overhead that undermines their efficiency gains. We introduce PTQ to Trit-Planes (PTQTP), the first ternary-weight PTQ framework that decomposes weight matrices into structured ternary {-1, 0, 1} trit-planes using a 2x1.58-bit representation. PTQTP achieves multiplication-free inference, matching the inference cost of 1-bit quantization, while maintaining superior expressiveness through its novel structured decomposition. Our approach provides: (1) a theoretically grounded progressive approximation algorithm ensuring global weight consistency; (2) model-agnostic deployment across diverse modern LLMs without architectural modifications; and (3) uniform ternary operations that eliminate the need for mixed-precision or compensation schemes. Comprehensive experiments across the LLaMA3.x and Qwen3 model families (0.6B-70B parameters) demonstrate that PTQTP significantly outperforms existing low-bit PTQ methods, achieving 82.4% mathematical reasoning retention versus 0% for competing approaches. PTQTP approaches, and sometimes surpasses, 1.58-bit quantization-aware training performance while requiring only single-hour quantization compared to 10-14 GPU days for training-based methods. These results establish PTQTP as a practical solution for efficient LLM deployment in resource-constrained environments.
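The multiplication-free claim can be illustrated with a small sketch: because every ternary weight is -1, 0, or 1, a matrix-vector product reduces to signed accumulation, with only one scalar multiply per plane for the scale. This is a toy demonstration of the arithmetic, not PTQTP's kernel implementation.

```python
import numpy as np

def ternary_matvec(T, x):
    """Compute T @ x for ternary T using only additions and subtractions."""
    y = np.empty(T.shape[0])
    for i, row in enumerate(T):
        # add inputs where the weight is +1, subtract where it is -1,
        # skip where it is 0 -- no multiplications needed
        y[i] = x[row == 1].sum() - x[row == -1].sum()
    return y

rng = np.random.default_rng(1)
T = rng.integers(-1, 2, size=(8, 16))  # ternary weight plane
x = rng.normal(size=16)
assert np.allclose(ternary_matvec(T, x), T @ x)
```

With two trit-planes, the full layer output is a₁·(T₁ @ x) + a₂·(T₂ @ x): two additive passes plus two scalar scalings, which is why the inference cost is comparable to 1-bit schemes.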