🤖 AI Summary
This work addresses low-rank matrix recovery from ultra-low-bit (e.g., 1–3 bit) quantized measurements, focusing on the theoretical performance of nuclear norm minimization under memoryless scalar quantization with random dithering. Unlike mainstream approaches that require likelihood-based regularization, we establish, for the first time, rigorous error bounds for dithered coarse-quantized matrix completion. We systematically distinguish two practical settings: when the dither values are exactly known at recovery time, and when only their statistical distribution is available. Our analysis quantifies how bit depth, dither distribution, pre-quantization noise, and sign flips, the last measured by Hamming distance, affect reconstruction error. We prove that nuclear norm minimization achieves robust recovery without auxiliary regularizers, revealing fundamental trade-offs among quantization resolution, dither statistics, and reconstruction accuracy.
📝 Abstract
We delve into the impact of memoryless scalar quantization on matrix completion. We broaden the theoretical discussion to the coarse quantization scenario with a dithering scheme, where the only information available for low-rank matrix recovery is few-bit, low-resolution data. Our primary motivation is to evaluate the recovery performance of nuclear norm minimization on quantized matrix problems without any regularization terms, such as those stemming from maximum likelihood estimation. We furnish theoretical guarantees for both scenarios: when the dithers are accessible during the reconstruction process, and when only their statistical properties are available. Additionally, we conduct a comprehensive analysis of the effects of sign flips and pre-quantization noise on recovery performance, with the impact of sign flips quantified via the well-known Hamming distance in the upper bound on the recovery error.
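The pipeline the abstract describes can be illustrated with a small numerical sketch: a low-rank matrix is passed through a dithered uniform scalar quantizer, and recovery from the quantized samples is attempted with a nuclear-norm-promoting method. This is only an illustrative stand-in, not the paper's exact estimator; the step size, quantization step `delta`, regularization weight `lam`, and the use of singular value thresholding (a proximal step for the nuclear norm) are all our own assumptions for the sketch. It corresponds to the "known dithers" setting, where the dither is subtracted before reconstruction, which makes the quantized samples unbiased surrogates of the clean entries.

```python
import numpy as np

def dithered_quantize(x, delta, dither):
    """Mid-rise uniform quantizer with step `delta` applied to x + dither.

    With dither ~ U[-delta/2, delta/2], Q(x + dither) - dither equals
    x plus an error that is uniform on [-delta/2, delta/2] and has
    zero mean, so subtracting a known dither de-biases the samples.
    """
    return delta * (np.floor((x + dither) / delta) + 0.5)

def svt(X, lam):
    """Singular value thresholding: proximal operator of lam * ||X||_*."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - lam, 0.0)) @ Vt

rng = np.random.default_rng(0)
n, r = 50, 3
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, n)) / np.sqrt(r)

mask = rng.random((n, n)) < 0.5                 # observed entries
delta = 1.0                                     # coarse quantization step
tau = rng.uniform(-delta / 2, delta / 2, (n, n))  # uniform dither

y = dithered_quantize(M, delta, tau)
y_tilde = y - tau                               # known-dither correction

# Proximal-gradient recovery: data fit on observed entries plus a
# nuclear-norm shrinkage step (stands in for the paper's program).
X, step, lam = np.zeros((n, n)), 1.0, 0.5
for _ in range(200):
    grad = mask * (X - y_tilde)                 # gradient of 0.5||P(X - y~)||^2
    X = svt(X - step * grad, step * lam)

rel_err = np.linalg.norm(X - M) / np.linalg.norm(M)
```

Even with a one-unit quantization step, the dithered samples carry enough information for the low-rank structure to be recovered to a nontrivial relative error, which is the qualitative behavior the paper's bounds make precise.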