AI Summary
Conventional lattice-based vector quantizers suffer from error distributions constrained by the geometry of fundamental Voronoi cells, limiting flexibility and performance in high-dimensional settings.
Method: We propose a general stochastic vector quantizer based on rejection sampling that explicitly reshapes the quantization error distribution to be uniform within a Euclidean ball, achieving, for the first time, strict input-independent uniformity over the ball.
Contributions/Results: Theoretical analysis shows that, at fixed entropy, our quantizer achieves lower maximum error than optimal lattice quantizers in dimensions 5–48, and lower mean squared error in dimensions 35–48. Furthermore, for additive noise channels satisfying mild conditions (e.g., AWGN), we characterize the high-SNR channel capacity limit under single-shot channel simulation, up to an additive constant of 1.45 bits. This work overcomes the geometric bottleneck inherent in lattice quantization and establishes a new paradigm for high-resolution quantization and channel simulation.
Abstract
We construct a randomized vector quantizer which has a smaller maximum error compared to all known lattice quantizers with the same entropy for dimensions 5, 6, …, 48, and also has a smaller mean squared error compared to known lattice quantizers with the same entropy for dimensions 35, …, 48, in the high resolution limit. Moreover, our randomized quantizer has the desirable property that the quantization error is always uniform over the ball and independent of the input. Our construction is based on applying rejection sampling on universal quantization, which allows us to shape the error distribution to be any continuous distribution, not only uniform distributions over basic cells of a lattice as in conventional dithered quantization. We also characterize the high-SNR limit of one-shot channel simulation for any additive noise channel under a mild assumption (e.g., the AWGN channel), up to an additive constant of 1.45 bits.
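To make the mechanism concrete, here is a minimal sketch of rejection sampling applied to subtractive dithered quantization. It uses the cubic lattice Δ·Zᵈ (the paper's construction is more general and uses different lattices and scalings): the dithered error is uniform over the cube [−Δ/2, Δ/2)ᵈ independently of the input, and rejecting draws whose error falls outside the inscribed ball leaves an error that is uniform over that ball. The function name `rsq_quantize` and the choice of lattice are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def rsq_quantize(x, delta=1.0, rng=None):
    """Rejection-sampled dithered quantization (illustrative sketch).

    Proposal: subtractive dither on the cubic lattice delta*Z^d, whose
    quantization error is uniform over [-delta/2, delta/2)^d and
    independent of the input x.  Rejection step: keep only draws whose
    error lies inside the inscribed ball of radius delta/2, so the
    accepted error is uniform over that ball (still input-independent).
    The dither u would be shared with the decoder as common randomness.
    """
    if rng is None:
        rng = np.random.default_rng()
    x = np.asarray(x, dtype=float)
    while True:
        u = rng.uniform(0.0, delta, size=x.shape)   # shared dither
        y = delta * np.round((x + u) / delta) - u   # dithered reconstruction
        e = y - x                                   # error, uniform over cube
        if np.linalg.norm(e) <= delta / 2:          # accept iff inside ball
            return y, e

# Collect accepted errors for an arbitrary input; their norms never
# exceed delta/2 and their distribution does not depend on x.
rng = np.random.default_rng(0)
errors = np.array([rsq_quantize([0.3, -1.7], rng=rng)[1] for _ in range(500)])
max_norm = np.linalg.norm(errors, axis=1).max()
```

Note the trade-off the paper analyzes: each rejection costs extra entropy (the accepted dither is more expensive to describe), which is why the gains over lattice quantizers appear only in certain dimension ranges.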