On the Optimality of Single-label and Multi-label Neural Network Decoders

📅 2025-03-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses a fundamental limitation of single-label and multi-label neural network (SLNN/MLNN) decoders: their reliance on approximate, empirically trained solutions with no guarantee of optimal performance. The authors propose an analytical, training-free decoder with strictly binary weights, constructed directly from the linear block code's codebook as a deterministic combinational circuit. Crucially, this construction is provably equivalent to maximum-likelihood (ML) decoding: the paper gives a theoretical proof that both single-label and multi-label architectures can achieve strict ML performance. Empirical evaluation on short codes, including Hamming(7,4), Polar(16,8), and BCH(31,21), confirms attainment of optimal error-rate performance at lower computational complexity than the SLNN/MLNN architectures previously proposed in the literature. The core contribution is the elimination of parameter learning via a verifiably optimal, interpretable mapping from codebook to binary circuit; extension to medium and long codes, however, remains hindered by the curse of dimensionality.

📝 Abstract
We investigate the design of two neural network (NN) architectures recently proposed as decoders for forward error correction: the so-called single-label NN (SLNN) and multi-label NN (MLNN) decoders. These decoders have been reported to achieve near-optimal codeword- and bit-wise performance, respectively. Results in the literature show near-optimality for a variety of short codes. In this paper, we analytically prove that certain SLNN and MLNN architectures can, in fact, always realize optimal decoding, regardless of the code. These optimal architectures and their binary weights are shown to be defined by the codebook, i.e., no training or network optimization is required. Our proposed architectures are in fact not NNs, but a different way of implementing the maximum likelihood decoding rule. Optimal performance is numerically demonstrated for Hamming $(7,4)$, Polar $(16,8)$, and BCH $(31,21)$ codes. The results show that our optimal architectures are less complex than the SLNN and MLNN architectures proposed in the literature, which in fact only achieve near-optimal performance. Extension to longer codes is still hindered by the curse of dimensionality. Therefore, even though SLNN and MLNN can perform maximum likelihood decoding, such architectures cannot be used for medium and long codes.
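The abstract states that the proposed architectures are simply a different way of implementing the maximum likelihood decoding rule defined by the codebook. As an illustration only (not the paper's circuit construction), a brute-force ML decoder for the Hamming(7,4) code can be sketched as follows; the systematic generator matrix used here is one common choice and is an assumption, not taken from the paper.

```python
import numpy as np

# Generator matrix for the Hamming(7,4) code (systematic form; a common
# choice, not necessarily the one used in the paper).
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

# Enumerate all 2^4 = 16 messages and the full codebook.
msgs = np.array([[(i >> j) & 1 for j in range(4)] for i in range(16)])
codebook = msgs @ G % 2

def ml_decode(llr):
    """Brute-force ML decoding over the codebook: pick the codeword whose
    BPSK image correlates best with the channel log-likelihood ratios."""
    bpsk = 1 - 2 * codebook           # bit 0 -> +1, bit 1 -> -1
    idx = np.argmax(bpsk @ llr)       # max correlation == ML decision
    return codebook[idx], msgs[idx]

# Noiseless sanity check: transmit a codeword and recover it.
c = codebook[5]
llr = (1 - 2 * c).astype(float) * 4.0  # high-confidence LLRs
cw, m = ml_decode(llr)
```

The exhaustive search over all $2^k$ codewords is exactly what limits this approach, and any architecture equivalent to it, to short codes: the codebook size grows exponentially in the message length $k$, which is the curse of dimensionality the abstract refers to.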
Problem

Research questions and friction points this paper is trying to address.

Analyzing optimality of single-label and multi-label NN decoders
Proving optimal decoding for specific SLNN and MLNN architectures
Addressing complexity limitations for medium and long codes
Innovation

Methods, ideas, or system contributions that make the work stand out.

Proof that certain SLNN and MLNN architectures realize optimal (ML) decoding
Binary weights defined by codebook, no training needed
Optimal architectures simpler than near-optimal ones
Yunus Can Gultekin
Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, the Netherlands
Péter Scheepers
School of Electrical Engineering and Computer Science, Technical University of Berlin, Berlin, Germany
Yuncheng Yuan
Eindhoven University of Technology
Signal Processing
Federico Corradi
Eindhoven University of Technology
Neuromorphic engineering, Spiking Neural Networks, Bio-signal processing
Alex Alvarado
Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, the Netherlands