Order-Optimal Sequential 1-Bit Mean Estimation in General Tail Regimes

📅 2026-04-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the problem of efficiently estimating the mean of a distribution with bounded mean and finite $k$-th central moment ($k > 1$) under 1-bit communication constraints. The authors propose a sequential, adaptive 1-bit estimator based on randomized threshold queries, which uses only 1-bit feedback indicating whether each sample exceeds a dynamically chosen threshold. This method achieves order-optimal sample complexity for all $k > 1$: when $k \neq 2$, it matches the unquantized minimax lower bound up to a logarithmic localization cost; when $k = 2$, the incurred logarithmic factor is shown to be information-theoretically unavoidable. The study further demonstrates a significant efficiency gap between adaptive and non-adaptive strategies and provides practical variants that accommodate unknown scale parameters and communication budgets.
📝 Abstract
In this paper, we study the problem of mean estimation under strict 1-bit communication constraints. We propose a novel adaptive mean estimator based solely on randomized threshold queries, where each 1-bit outcome indicates whether a given sample exceeds a sequentially chosen threshold. Our estimator is $(\varepsilon, \delta)$-PAC for any distribution with a bounded mean $\mu \in [-\lambda, \lambda]$ and a bounded $k$-th central moment $\mathbb{E}[|X-\mu|^k] \le \sigma^k$ for any fixed $k > 1$. Crucially, our sample complexity is order-optimal in all such tail regimes, i.e., for every such $k$ value. For $k \neq 2$, our estimator's sample complexity matches the unquantized minimax lower bounds plus an unavoidable $O(\log(\lambda/\sigma))$ localization cost. For the finite-variance case ($k=2$), our estimator's sample complexity has an extra multiplicative $O(\log(\sigma/\varepsilon))$ penalty, and we establish a novel information-theoretic lower bound showing that this penalty is a fundamental limit of 1-bit quantization. We also establish a significant adaptivity gap: for both threshold queries and more general interval queries, the sample complexity of any non-adaptive estimator must scale linearly with the search space parameter $\lambda/\sigma$, rendering it vastly less sample efficient than our adaptive approach. Finally, we present algorithmic variants that (i) handle an unknown sampling budget, (ii) adapt to an unknown scale parameter~$\sigma$ given (possibly loose) bounds, and (iii) require only two stages of adaptivity at the expense of more complicated general 1-bit queries.
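To make the randomized-threshold idea concrete, here is a minimal non-adaptive sketch (not the paper's adaptive estimator): each sample yields a single bit indicating whether it exceeds a threshold drawn uniformly from $[-\lambda, \lambda]$. Under the extra simplifying assumption that the samples are supported in $[-\lambda, \lambda]$ (the paper only assumes a bounded mean and a $k$-th moment condition), $\Pr(X > U) = (\mu + \lambda)/(2\lambda)$, so rescaling the empirical bit frequency gives an unbiased mean estimate. The function name and interface are illustrative, not from the paper.

```python
import random


def one_bit_mean_estimate(samples, lam, rng=None):
    """Non-adaptive 1-bit mean estimate via uniform random thresholds.

    Illustrative sketch only: assumes each sample lies in [-lam, lam].
    Each sample x is reduced to one bit, 1{x > U} with U ~ Uniform[-lam, lam].
    Then P(X > U) = (mu + lam) / (2 * lam), so rescaling the empirical
    frequency of 1s recovers an unbiased estimate of the mean mu.
    """
    if rng is None:
        rng = random.Random(0)  # fixed seed for reproducibility of the sketch
    bits = [1 if x > rng.uniform(-lam, lam) else 0 for x in samples]
    freq = sum(bits) / len(bits)
    return 2 * lam * freq - lam
```

As the abstract notes, any such non-adaptive scheme pays a sample-complexity factor linear in $\lambda/\sigma$; the paper's sequential estimator instead adapts the thresholds to localize $\mu$ and avoids that cost up to logarithmic terms.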
Problem

Research questions and friction points this paper is trying to address.

1-bit communication
mean estimation
sample complexity
adaptive estimation
PAC learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

1-bit quantization
adaptive mean estimation
order-optimal sample complexity
PAC learning
information-theoretic lower bound