Robust Sparse Mean Estimation via Incremental Learning

📅 2023-05-24
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
Robust estimation of a $k$-sparse mean in high dimensions under heavy-tailed distributions, with partially corrupted samples, remains challenging: existing methods require prior knowledge of the sparsity level $k$ and scale poorly with the ambient dimension. Method: We propose a non-convex incremental learning framework that operates without knowing $k$ a priori. It achieves robust estimation by incrementally identifying the top-$k$ support, iteratively refining the candidate support set. Contribution/Results: We establish theoretical guarantees showing the algorithm attains $\tilde{O}(k)$ sample complexity, which we prove optimal via a matching information-theoretic lower bound. It is the first method to achieve near-linear time and space complexity (in the ambient dimension) for this problem. Experiments demonstrate significant improvements over state-of-the-art methods under high-dimensional heavy-tailed noise. Our implementation is publicly available.
📝 Abstract
In this paper, we study the problem of robust sparse mean estimation, where the goal is to estimate a $k$-sparse mean from a collection of partially corrupted samples drawn from a heavy-tailed distribution. Existing estimators face two critical challenges in this setting. First, they are limited by a conjectured computational-statistical tradeoff, implying that any computationally efficient algorithm needs $\tilde{\Omega}(k^2)$ samples, while its statistically-optimal counterpart only requires $\tilde{O}(k)$ samples. Second, the existing estimators fall short of practical use as they scale poorly with the ambient dimension. This paper presents a simple mean estimator that overcomes both challenges under moderate conditions: it runs in near-linear time and memory (both with respect to the ambient dimension) while requiring only $\tilde{O}(k)$ samples to recover the true mean. At the core of our method lies an incremental learning phenomenon: we introduce a simple nonconvex framework that can incrementally learn the top-$k$ nonzero elements of the mean while keeping the zero elements arbitrarily small. Unlike existing estimators, our method does not need any prior knowledge of the sparsity level $k$. We prove the optimality of our estimator by providing a matching information-theoretic lower bound. Finally, we conduct a series of simulations to corroborate our theoretical findings. Our code is available at https://github.com/huihui0902/Robust_mean_estimation.
Problem

Research questions and friction points this paper is trying to address.

Estimating sparse mean without prior sparsity knowledge
Overcoming poor scalability in high-dimensional settings
Handling heavy-tailed distributions with partial corruptions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Incremental learning framework for sparse mean estimation
Works without prior knowledge of sparsity level
Runs in near-linear time and memory complexity
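The incremental learning phenomenon the abstract describes, where nonzero coordinates are learned one by one while true zeros stay arbitrarily small, can be illustrated with a minimal sketch. This is not the paper's estimator: the sketch below assumes a Hadamard over-parameterization theta = u*u - v*v with small initialization, and uses a coordinate-wise median as a stand-in robust target; both devices are illustrative assumptions, not details confirmed by the source.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, k = 50, 200, 3

# Ground-truth k-sparse mean in ambient dimension d.
mu = np.zeros(d)
mu[:k] = [5.0, -4.0, 3.0]

# Heavy-tailed samples (Student-t, 3 dof) with 5% of rows corrupted.
X = mu + rng.standard_t(df=3, size=(n, d))
X[:10] += 50.0

# Robust coordinate-wise target (illustrative stand-in, not the paper's).
m = np.median(X, axis=0)

# Over-parameterize theta = u*u - v*v and initialize near zero: gradient
# descent on 0.5 * ||theta - m||^2 grows the large (true-support)
# coordinates at an exponential rate while the true-zero coordinates
# barely move, so stopping early yields a sparse estimate with no prior
# knowledge of k.
alpha, lr, steps = 1e-3, 0.05, 150
u = np.full(d, alpha)
v = np.full(d, alpha)
for _ in range(steps):
    g = u * u - v * v - m                          # gradient w.r.t. theta
    u, v = u - lr * 2 * u * g, v + lr * 2 * v * g  # chain rule through u, v

theta = u * u - v * v
support = np.flatnonzero(np.abs(theta) > 1.0)
print(support)  # the k large coordinates emerge; the rest stay near zero
```

Note the sparsity level never enters the update; it is recovered implicitly because small-initialization gradient dynamics pick up coordinates in order of magnitude, mirroring the progressive support identification described above.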