PTQTP: Post-Training Quantization to Trit-Planes for Large Language Models

πŸ“… 2025-09-21
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
To address the trade-off between computational efficiency and representational capacity in post-training quantization (PTQ) of large language models (LLMs) at ultra-low bit-widths, this paper introduces PTQTPβ€”the first PTQ framework supporting structured ternary ({βˆ’1, 0, 1}) weight quantization. Its core innovation is the trit-plane decomposition, ensuring global weight consistency, model-agnostic deployment, and purely additive (multiplication-free) inference using only 2Γ—1.58 bits per weight. PTQTP requires no mixed-precision schemes, compensation modules, or retraining, enabling full quantization within one hour. Leveraging a theory-driven progressive approximation algorithm and unified ternary arithmetic design, PTQTP achieves an 82.4% retention rate on mathematical reasoning tasks across LLaMA3.x and Qwen3 (0.6B–70B), matching or surpassing the performance of 1.58-bit quantization-aware training.
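The trit-plane idea can be illustrated with a small sketch: approximate a weight matrix as a sum of two scaled ternary planes, fitting each plane progressively against the residual left by the previous one. This is an illustrative reconstruction under common ternarization heuristics (the 0.7·E|w| dead-zone threshold and least-squares scale from ternary weight networks), not the paper's exact algorithm; `threshold_frac` and the per-matrix scale are assumptions for the sketch.

```python
import numpy as np

def ternarize(w, delta):
    """Map weights to {-1, 0, +1} with dead-zone threshold delta, and fit
    the least-squares-optimal scale for that fixed ternary pattern."""
    t = np.where(w > delta, 1.0, np.where(w < -delta, -1.0, 0.0))
    alpha = np.abs(w[t != 0]).mean() if np.count_nonzero(t) else 0.0
    return alpha, t

def trit_plane_decompose(w, threshold_frac=0.7):
    """Progressively approximate W ~ a1*T1 + a2*T2 with two scaled ternary
    planes: fit plane 1 to W, then plane 2 to the remaining residual."""
    planes = []
    residual = w.copy()
    for _ in range(2):
        delta = threshold_frac * np.abs(residual).mean()  # heuristic threshold
        alpha, t = ternarize(residual, delta)
        planes.append((alpha, t))
        residual = residual - alpha * t  # next plane fits what remains
    return planes, residual

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 64)).astype(np.float32)
planes, _ = trit_plane_decompose(W)
approx = sum(a * t for a, t in planes)
print(np.linalg.norm(W - approx) / np.linalg.norm(W))  # relative error shrinks per plane
```

Because each plane's scale is least-squares optimal for its ternary pattern, every added plane can only reduce the residual norm, which is the sense in which the approximation is "progressive".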

πŸ“ Abstract
Post-training quantization (PTQ) of large language models (LLMs) to extremely low bit-widths remains challenging due to the fundamental trade-off between computational efficiency and model expressiveness. While existing ultra-low-bit PTQ methods rely on binary approximations or complex compensation mechanisms, they suffer from either limited representational capacity or computational overhead that undermines their efficiency gains. We introduce PTQ to Trit-Planes (PTQTP), the first ternary-weight PTQ framework that decomposes weight matrices into structured ternary {−1, 0, 1} trit-planes using a 2×1.58-bit representation. PTQTP achieves multiplication-free inference, identical to 1-bit quantization, while maintaining superior expressiveness through its novel structured decomposition. Our approach provides: (1) a theoretically grounded progressive approximation algorithm ensuring global weight consistency; (2) model-agnostic deployment across diverse modern LLMs without architectural modifications; and (3) uniform ternary operations that eliminate the need for mixed-precision or compensation schemes. Comprehensive experiments across LLaMA3.x and Qwen3 model families (0.6B–70B parameters) demonstrate that PTQTP significantly outperforms existing low-bit PTQ methods, achieving 82.4% mathematical reasoning retention versus 0% for competing approaches. PTQTP approaches and sometimes surpasses 1.58-bit quantization-aware training performance while requiring only single-hour quantization compared to 10–14 GPU days for training-based methods. These results establish PTQTP as a practical solution for efficient LLM deployment in resource-constrained environments.
Problem

Research questions and friction points this paper is trying to address.

Quantizing large language models to extremely low bit-widths while balancing efficiency and expressiveness
Overcoming limited representational capacity and computational overhead in existing ultra-low-bit PTQ methods
Achieving multiplication-free inference like 1-bit quantization while maintaining superior model expressiveness
Innovation

Methods, ideas, or system contributions that make the work stand out.

Ternary-weight PTQ via structured trit-plane decomposition
Progressive approximation algorithm ensuring global weight consistency
Model-agnostic deployment with uniform ternary operations
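The multiplication-free property of uniform ternary operations can be seen in a minimal sketch: a matrix-vector product against a {−1, 0, +1} matrix reduces to masked additions and subtractions, and combining two trit-planes needs only one scalar multiply per plane. The plane scales `a1`, `a2` and shapes here are hypothetical placeholders, not values from the paper.

```python
import numpy as np

def ternary_matvec(x, t):
    """x @ t for t with entries in {-1, 0, +1}: each output column is a sum
    of +x[i] where t[i,j] == +1 and -x[i] where t[i,j] == -1, so no weight
    multiplications are performed."""
    pos, neg = (t == 1), (t == -1)
    return np.array([x[pos[:, j]].sum() - x[neg[:, j]].sum()
                     for j in range(t.shape[1])])

# Two trit-planes combine with just two scalar multiplies for the whole output.
rng = np.random.default_rng(1)
x = rng.normal(size=8)
T1 = rng.integers(-1, 2, size=(8, 4)).astype(np.float64)  # ternary planes
T2 = rng.integers(-1, 2, size=(8, 4)).astype(np.float64)
a1, a2 = 0.9, 0.3  # hypothetical plane scales
y = a1 * ternary_matvec(x, T1) + a2 * ternary_matvec(x, T2)
assert np.allclose(y, x @ (a1 * T1 + a2 * T2))  # matches the dense product
```

Since both planes use the same ternary arithmetic, no mixed-precision kernels or compensation paths are needed at inference time, which is the deployment simplification the bullet points describe.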
πŸ”Ž Similar Papers
No similar papers found.