🤖 AI Summary
To address the high memory consumption and latency of multi-mode Chain-of-Thought (CoT) inference (*slow_think*, *auto_think*, and *no_think*) on openPangu-Embedded-1B/7B models deployed on the Atlas A2 Ascend NPU, this work proposes the first Ascend-native unified INT8/W4A8 post-training quantization framework. The method jointly optimizes computation and storage by combining CoT-mode-aware calibration, custom CANN operators, and weight-activation quantization co-design, while preserving inference fidelity. Experiments show that INT8 quantization retains over 90% of FP16 accuracy on the HumanEval and MBPP benchmarks with a 1.5× improvement in prefill throughput, while W4A8 quantization substantially reduces the on-device memory footprint. This is the first work to validate efficient multi-mode CoT inference on domestic NPUs, establishing a practical pathway for resource-constrained large language model execution on Ascend hardware.
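The summary names the ingredients of the W8A8 recipe but shows no code. The following is a minimal NumPy sketch of that recipe under common post-training quantization assumptions: symmetric per-output-channel weight scales and a single per-tensor activation scale estimated from calibration batches. The function names are illustrative, and the paper's exact scheme (clipping strategy, scale granularity, CANN operator details) may differ.

```python
import numpy as np

def quantize_weight_int8(w: np.ndarray):
    """Symmetric per-output-channel INT8 quantization of a linear weight.

    w: FP16/FP32 weight of shape [out_features, in_features].
    Returns INT8 weights plus one FP32 scale per output channel.
    """
    scale = np.abs(w).max(axis=1, keepdims=True) / 127.0
    scale = np.maximum(scale, 1e-8)                      # guard all-zero rows
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale.astype(np.float32)

def calibrate_activation_scale(calib_acts):
    """Per-tensor activation scale from calibration batches (the A8 side).

    A CoT-mode-aware calibration set would mix slow_think / auto_think /
    no_think traces here, so the observed range covers all three modes.
    """
    amax = max(float(np.abs(a).max()) for a in calib_acts)
    return amax / 127.0

# At inference: y ≈ (q_w.astype(np.int32) @ q_x) * (scale_w * scale_x),
# with the INT8 GEMM executed on the NPU's integer matmul units.
```

Per-channel weight scales are the usual choice here because outlier channels in LLM weights would otherwise dominate a single per-tensor scale and crush the precision of all other channels.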
📝 Abstract
Huawei's openPangu-Embedded-1B and openPangu-Embedded-7B, variants of the openPangu large language model, integrate three distinct Chain-of-Thought (CoT) reasoning paradigms: slow_think, auto_think, and no_think. While these CoT modes enhance reasoning capability, the extended reasoning traces they generate introduce substantial memory and latency overheads, posing challenges for practical deployment on Ascend NPUs. This paper addresses these constraints with low-bit quantization, which replaces FP16 computation with more efficient integer arithmetic. We introduce a unified low-bit inference framework supporting INT8 (W8A8) and W4A8 quantization, optimized for openPangu-Embedded models on the Atlas A2. A comprehensive evaluation across all three CoT modes on code generation benchmarks (HumanEval and MBPP) demonstrates the efficacy of the approach: INT8 quantization consistently preserves over 90% of the FP16 baseline accuracy and achieves a 1.5× prefill speedup on the Atlas A2, while W4A8 quantization significantly reduces memory consumption at a moderate cost in accuracy. These findings indicate that low-bit quantization enables efficient CoT reasoning on Ascend NPUs while maintaining high model fidelity.
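The abstract describes W4A8 only at a high level. Below is a minimal sketch of the 4-bit weight side, assuming group-wise symmetric scales and nibble packing; the group size of 128 and all names are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def quantize_weight_w4(w: np.ndarray, group_size: int = 128):
    """Group-wise symmetric INT4 weight quantization with nibble packing.

    w: weight of shape [out_features, in_features], with in_features
    divisible by group_size. Each group shares one FP32 scale; this fine
    granularity recovers most of the accuracy lost when dropping to 4 bits.
    """
    out_f, in_f = w.shape
    g = w.reshape(out_f, in_f // group_size, group_size)
    scale = np.abs(g).max(axis=2, keepdims=True) / 7.0   # symmetric range [-7, 7]
    scale = np.maximum(scale, 1e-8)
    q = np.clip(np.round(g / scale), -7, 7).astype(np.int8).reshape(out_f, in_f)
    # Pack two signed nibbles per byte: weight storage shrinks to 1/4 of FP16.
    lo = (q[:, 0::2] & 0x0F).astype(np.uint8)
    hi = (q[:, 1::2] & 0x0F).astype(np.uint8)
    return lo | (hi << 4), scale.squeeze(-1).astype(np.float32)
```

In the typical W4A8 execution pattern, the packed weights are unpacked and rescaled on-chip so the matmul still runs against 8-bit activations: the 4-bit format buys memory savings, while keeping activations at 8 bits avoids the accuracy penalty of 4-bit activations.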