Efficient Low-Memory Fast Stack Decoding with Variance Polarization for PAC Codes

📅 2025-09-08
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the high latency and computational complexity caused by path explosion in stack decoding of polarization-adjusted convolutional (PAC) codes, this paper proposes a low-memory, high-performance parallel stack decoding algorithm. The method introduces two key innovations: (1) a variance-polarization–based path pruning strategy that discards erroneous paths with exponentially decaying probability; and (2) an integrated stack management mechanism combining rate-0/rate-1 node identification, subtree pruning, approximate variance polarization estimation for BI-AWGN channels, and path metric mean alignment. Evaluated on PAC(128,64), the algorithm reduces the average number of active paths by up to 70%, significantly lowering decoding latency and computational complexity while preserving frame error rate (FER) performance. This approach achieves a favorable trade-off between efficiency and reliability for practical decoding of short-length PAC codes.

📝 Abstract
Polarization-adjusted convolutional (PAC) codes have recently emerged as a promising class of error-correcting codes, achieving near-capacity performance particularly in the short block-length regime. In this paper, we propose an enhanced stack decoding algorithm for PAC codes that significantly improves parallelization by exploiting specialized bit nodes, such as rate-0 and rate-1 nodes. For a rate-1 node with $N_0$ leaf nodes in its corresponding subtree, conventional stack decoding must either explore all $2^{N_0}$ paths or, as in fast list decoding, restrict attention to a constant number of candidate paths. In contrast, our approach introduces a pruning technique that discards wrong paths with a probability exponentially approaching zero, retaining only those whose path metrics remain close to their expected mean values. Furthermore, we propose a novel approximation method for estimating variance polarization under the binary-input additive white Gaussian noise (BI-AWGN) channel. Leveraging these approximations, we develop an efficient stack-pruning strategy that selectively preserves decoding paths whose bit-metric values align with their expected means. This targeted pruning substantially reduces the number of active paths in the stack, thereby decreasing both decoding latency and computational complexity. Numerical results demonstrate that for a PAC(128,64) code, our method achieves up to a 70% reduction in the average number of paths without degrading error-correction performance.
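The mean-alignment pruning described in the abstract can be sketched as a simple threshold rule: keep only paths whose path metric stays close to its expected mean. The sketch below is illustrative, not the paper's exact algorithm; the `alpha` threshold, the Gaussian deviation model, and the dictionary path representation are assumptions for the example.

```python
import math

def prune_paths(paths, expected_mean, expected_var, alpha=3.0):
    """Keep only paths whose path metric lies within alpha standard
    deviations of the expected mean. (Hypothetical threshold rule:
    the paper derives its thresholds from variance polarization;
    here alpha is a free parameter for illustration.)"""
    sigma = math.sqrt(expected_var)
    return [p for p in paths
            if abs(p["metric"] - expected_mean) <= alpha * sigma]

# The correct path's metric clusters near the expected mean, while
# wrong paths drift away and are discarded with high probability.
paths = [{"id": 0, "metric": -1.9},   # near mean -> kept
         {"id": 1, "metric": -9.4},   # far from mean -> pruned
         {"id": 2, "metric": -2.3}]   # near mean -> kept
kept = prune_paths(paths, expected_mean=-2.0, expected_var=0.25)
```

Because wrong paths accumulate metric penalties that grow with depth, their probability of landing inside the threshold band decays exponentially, which is what keeps the stack small.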
Problem

Research questions and friction points this paper is trying to address.

Reducing decoding latency and complexity for PAC codes
Improving stack decoding efficiency via variance polarization
Pruning wrong paths while maintaining error-correction performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Pruning technique discards wrong paths with exponentially vanishing probability
Variance polarization approximation for BI-AWGN channel
Selective path preservation reduces active paths
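The rate-1 node handling mentioned above avoids enumerating all $2^{N_0}$ paths in a subtree: since every leaf of a rate-1 node is an information bit, a near-ML sub-codeword follows directly from the signs of the LLRs. This is a minimal sketch of that standard fast-node idea, not the paper's full scheme (which additionally scores retained candidates by their path metrics):

```python
def decode_rate1_node(llrs):
    """Hard-decision decoding of a rate-1 node: each leaf is an
    information bit, so the most likely sub-codeword is read off
    bit-by-bit from the LLR signs instead of exploring 2**N0 paths.
    Sign convention assumed here: positive LLR -> bit 0."""
    return [0 if llr >= 0 else 1 for llr in llrs]

# Four-leaf rate-1 node decoded in one step:
bits = decode_rate1_node([2.1, -0.7, 0.3, -1.5])  # -> [0, 1, 0, 1]
```

Rate-0 nodes are even cheaper: all leaves are frozen to zero, so the whole subtree contributes a single deterministic path.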
Mohsen Moradi
Department of Electrical & Computer Engineering, Northeastern University, Boston, MA 02115, USA
Hessam Mahdavifar
Northeastern University
Coding Theory · Wireless Communications · Security and Privacy · Machine Learning