🤖 AI Summary
This paper studies sequential mean-squared error (MSE) estimation and MSE-optimal $m$-dimensional subset identification for a $K$-dimensional Gaussian vector, in a feedback-constrained setting where only $m < K$ components are observable per round. We propose a feedback-aware estimation framework for MSE-optimal subset identification: an adaptive regression-based estimator together with a variant of the successive elimination algorithm, improving both estimation accuracy and the reliability of subset identification. The adaptive estimator exhibits sharper concentration than its non-adaptive counterpart; the algorithm identifies the MSE-optimal $m$-dimensional subset with high probability; and, leveraging concentration inequalities and minimax theory, we derive a lower bound on the sample complexity that characterizes the fundamental sample-efficiency limit for this task.
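To make the objective concrete, here is an illustrative sketch (not the paper's estimator) of why subset choice matters: for a Gaussian vector with a *known* covariance, the MMSE estimate of the unobserved block given an observed subset is linear (the regression view), and the resulting MSE has a closed form. The function name `mse_of_subset` and the toy covariance are hypothetical.

```python
import numpy as np

def mse_of_subset(Sigma, S):
    """MSE of estimating the coordinates outside S from those in S.

    For X ~ N(mu, Sigma), E[X_U | X_S] = mu_U + Sigma_US Sigma_SS^{-1} (X_S - mu_S),
    and the incurred MSE is tr(Sigma_UU - Sigma_US Sigma_SS^{-1} Sigma_SU).
    """
    K = Sigma.shape[0]
    U = [i for i in range(K) if i not in S]
    S = list(S)
    Sigma_SS = Sigma[np.ix_(S, S)]
    Sigma_US = Sigma[np.ix_(U, S)]
    Sigma_UU = Sigma[np.ix_(U, U)]
    # Conditional covariance of the unobserved block given the observed one.
    cond_cov = Sigma_UU - Sigma_US @ np.linalg.solve(Sigma_SS, Sigma_US.T)
    return float(np.trace(cond_cov))

# Toy instance: K = 3, observe m = 1 coordinate. Coordinates 0 and 1 are
# highly correlated, so observing either one is far more informative than
# observing the independent coordinate 2.
Sigma = np.array([[1.0, 0.9, 0.0],
                  [0.9, 1.0, 0.0],
                  [0.0, 0.0, 1.0]])
best = min([(0,), (1,), (2,)], key=lambda S: mse_of_subset(Sigma, S))
print(best)
```

In the paper's setting the covariance is unknown, which is precisely why the MSE of each subset must itself be learned from partial observations.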
📝 Abstract
We consider the problem of sequentially learning to estimate, in the mean squared error (MSE) sense, a Gaussian $K$-vector of unknown covariance by observing only $m<K$ of its entries in each round. We propose two MSE estimators and analyze their concentration properties. The first estimator is non-adaptive: it is tied to a predetermined $m$-subset and cannot transition to alternative subsets. The second, derived via a regression framework, is adaptive and enjoys sharper concentration bounds than the first. We then frame MSE estimation as a problem with bandit feedback, where the objective is to identify the MSE-optimal subset with high confidence, and propose a variant of the successive elimination algorithm to solve it. Finally, we derive a minimax lower bound that characterizes the fundamental limit on the sample complexity of this problem.
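The subset-identification step can be sketched as generic successive elimination over candidate $m$-subsets, treating each subset as an arm whose noisy "loss" stands in for its estimated MSE. This is a simplified stand-in for the paper's variant, under the assumption of sub-Gaussian losses; the names `successive_elimination` and `pull`, and the variance-based proxy loss, are hypothetical.

```python
import itertools
import math
import random

def successive_elimination(arms, pull, delta=0.05, max_rounds=2000):
    """Generic successive elimination for lowest-mean-arm identification.

    arms : list of arm identifiers (here, m-subsets of coordinates).
    pull : arm -> one noisy sample of that arm's loss.
    Each round, sample every active arm once, then drop any arm whose
    lower confidence bound exceeds the best upper confidence bound.
    """
    active = list(arms)
    means = {a: 0.0 for a in active}
    counts = {a: 0 for a in active}
    for t in range(1, max_rounds + 1):
        for a in active:
            x = pull(a)
            counts[a] += 1
            means[a] += (x - means[a]) / counts[a]
        # Hoeffding-style radius (assumes sub-Gaussian losses).
        rad = math.sqrt(math.log(4 * len(arms) * t * t / delta) / (2 * t))
        best_ucb = min(means[a] + rad for a in active)  # lowest loss is best
        active = [a for a in active if means[a] - rad <= best_ucb]
        if len(active) == 1:
            break
    return min(active, key=lambda a: means[a])

# Toy instance: K = 4 coordinates, choose m = 2. The loss of a subset S is a
# noisy version of the total variance left unobserved -- a crude proxy for the
# MSE incurred when the entries outside S must be estimated.
random.seed(0)
K, m = 4, 2
var = [1.0, 0.2, 0.8, 0.1]  # per-coordinate variances
arms = list(itertools.combinations(range(K), m))

def pull(S):
    return sum(var[i] for i in range(K) if i not in S) + random.gauss(0, 0.05)

best = successive_elimination(arms, pull)
print(best)  # the subset covering the two highest-variance coordinates
```

The confidence radius shrinks as $\sqrt{\log t / t}$, so arms whose losses are separated by a gap $\Delta$ are eliminated after roughly $\tilde{O}(1/\Delta^2)$ rounds, which is the kind of sample-complexity behavior the paper's lower bound addresses.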