Rejection-Sampled Universal Quantization for Smaller Quantization Errors

πŸ“… 2024-02-05
πŸ›οΈ International Symposium on Information Theory
πŸ“ˆ Citations: 5
✨ Influential: 0
πŸ€– AI Summary
Conventional lattice-based vector quantizers have error distributions constrained to the fundamental Voronoi cell of the lattice, limiting flexibility and performance in high-dimensional settings. Method: The paper proposes a randomized vector quantizer based on applying rejection sampling to universal (dithered) quantization, reshaping the quantization error to be uniform over a Euclidean ball and strictly independent of the input. Contributions/Results: Theoretical analysis shows that, at the same entropy in the high-resolution limit, the quantizer achieves a smaller maximum error than all known lattice quantizers in dimensions 5–48, and a smaller mean squared error in dimensions 35–48. Furthermore, for additive noise channels satisfying a mild assumption (e.g., the AWGN channel), the paper characterizes the high-SNR limit of one-shot channel simulation up to an additive constant of 1.45 bits. This work sidesteps the geometric constraint inherent in lattice quantization, allowing the error distribution to be shaped into any continuous distribution rather than only uniform distributions over lattice cells.
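The core mechanism can be illustrated with a minimal sketch: dithered quantization on the scaled integer lattice yields an error uniform over the cubic cell and independent of the input, and rejecting dithers whose error falls outside the inscribed ball leaves an error uniform over that ball. This is a simplified illustration only, assuming the scaled integer lattice and ignoring the paper's entropy-coding and shared-randomness machinery; the function name and parameters are hypothetical.

```python
import numpy as np

def rejection_sampled_uq(x, delta=1.0, rng=None, max_tries=10000):
    """Illustrative sketch (not the paper's full scheme): dithered
    quantization of x in R^n on the lattice delta*Z^n, with rejection
    sampling so the reconstruction error is uniform over the ball of
    radius delta/2 inscribed in the cubic cell."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x, dtype=float)
    r = delta / 2.0  # radius of the ball inscribed in [-delta/2, delta/2)^n
    for _ in range(max_tries):
        u = rng.uniform(-delta / 2, delta / 2, size=x.shape)  # fresh dither
        y = delta * np.round((x + u) / delta) - u  # dithered reconstruction
        e = y - x  # uniform over the cubic cell, independent of x
        if np.linalg.norm(e) <= r:  # accept only errors inside the ball
            return y, e
    raise RuntimeError("rejection sampling did not accept a sample")
```

Note that the acceptance probability is vol(ball)/vol(cube), which decays with dimension; the paper's dimension ranges reflect a more careful trade-off between entropy and error than this toy version captures.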

πŸ“ Abstract
We construct a randomized vector quantizer which has a smaller maximum error compared to all known lattice quantizers with the same entropy for dimensions 5, 6,…, 48, and also has a smaller mean squared error compared to known lattice quantizers with the same entropy for dimensions 35,…, 48, in the high resolution limit. Moreover, our randomized quantizer has a desirable property that the quantization error is always uniform over the ball and independent of the input. Our construction is based on applying rejection sampling on universal quantization, which allows us to shape the error distribution to be any continuous distribution, not only uniform distributions over basic cells of a lattice as in conventional dithered quantization. We also characterize the high SNR limit of one-shot channel simulation for any additive noise channel under a mild assumption (e.g., the AWGN channel), up to an additive constant of 1.45 bits.
Problem

Research questions and friction points this paper is trying to address.

Constructing a randomized vector quantizer with smaller maximum error
Achieving uniform and input-independent quantization error distribution
Characterizing high SNR limit for one-shot channel simulation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Rejection sampling shapes universal quantization error distribution
Randomized quantizer achieves smaller maximum and mean squared errors
Method enables any continuous distribution for quantization error
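The last point, shaping the error to an arbitrary continuous distribution, is ordinary rejection sampling against the uniform-over-cell proposal that dithered quantization provides. Below is a hedged sketch under the same simplifying assumptions as above (scaled integer lattice, no entropy coding); `target_pdf` and `pdf_max` are names introduced for this illustration, not the paper's notation.

```python
import numpy as np

def shaped_error_uq(x, target_pdf, pdf_max, delta=1.0, rng=None, max_tries=100000):
    """Illustrative sketch: shape the dithered-quantization error to a
    continuous target density supported inside the cell [-delta/2, delta/2)^n.
    `target_pdf(e)` evaluates the (unnormalized) target density and
    `pdf_max` upper-bounds it over the cell."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x, dtype=float)
    for _ in range(max_tries):
        u = rng.uniform(-delta / 2, delta / 2, size=x.shape)  # fresh dither
        y = delta * np.round((x + u) / delta) - u  # dithered reconstruction
        e = y - x  # uniform over the cell before rejection
        # standard accept/reject step: accept with prob target_pdf(e)/pdf_max
        if rng.uniform() * pdf_max <= target_pdf(e):
            return y, e
    raise RuntimeError("rejection sampling did not accept a sample")
```

For example, passing a truncated-Gaussian density concentrates the error near the lattice point, while an indicator of a ball recovers the uniform-over-ball error above.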
Chih Wei Ling
Department of Information Engineering, The Chinese University of Hong Kong, Hong Kong, China
Cheuk Ting Li
Assistant Professor, Dept of Information Engineering, CUHK
Information Theory