Minimum mean-squared error estimation with bandit feedback

📅 2022-03-31
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper studies sequential mean-squared error (MSE) estimation of a $K$-dimensional Gaussian vector with unknown covariance, under a feedback-constrained setting where only $m < K$ components are observable per round. The authors propose two MSE estimators: a non-adaptive one tied to a fixed $m$-subset, and an adaptive, regression-based one with sharper concentration bounds. Building on these estimators, a variant of the successive elimination algorithm identifies the MSE-optimal $m$-dimensional subset with high probability, and a minimax lower bound characterizes the fundamental sample complexity of the task.
📝 Abstract
We consider the problem of sequentially learning to estimate, in the mean squared error (MSE) sense, a Gaussian $K$-vector of unknown covariance by observing only $m<K$ of its entries in each round. We propose two MSE estimators, and analyze their concentration properties. The first estimator is non-adaptive, as it is tied to a predetermined $m$-subset and lacks the flexibility to transition to alternative subsets. The second estimator, which is derived using a regression framework, is adaptive and exhibits better concentration bounds in comparison to the first estimator. We frame the MSE estimation problem with bandit feedback, where the objective is to find the MSE-optimal subset with high confidence. We propose a variant of the successive elimination algorithm to solve this problem. We also derive a minimax lower bound to understand the fundamental limit on the sample complexity of this problem.
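The paper's estimators are not reproduced here, but the target quantity can be illustrated. As a minimal sketch, assuming the MSE of observing an $m$-subset $S$ is the total conditional variance of the unobserved coordinates given the observed ones (the Gaussian/Schur-complement formula), the MSE-optimal subset for a known covariance can be found by brute force. The function names and the exhaustive search below are illustrative, not the paper's method, which works with an unknown covariance and bandit feedback.

```python
import itertools
import numpy as np

def subset_mse(cov, S):
    """MSE of the best (Gaussian/linear) estimate of the unobserved
    coordinates given the coordinates in S: the trace of the Schur
    complement cov[Sc,Sc] - cov[Sc,S] cov[S,S]^{-1} cov[S,Sc].
    (Assumed formula for illustration; see the paper for the exact setup.)"""
    K = cov.shape[0]
    S = list(S)
    Sc = [i for i in range(K) if i not in S]  # unobserved coordinates
    A = cov[np.ix_(Sc, Sc)]
    B = cov[np.ix_(Sc, S)]
    C = cov[np.ix_(S, S)]
    return float(np.trace(A - B @ np.linalg.solve(C, B.T)))

def best_subset(cov, m):
    """Brute-force search for the MSE-optimal m-subset (illustrative only;
    the point of the paper is to avoid knowing cov in advance)."""
    K = cov.shape[0]
    return min(itertools.combinations(range(K), m),
               key=lambda S: subset_mse(cov, S))

# Example: coordinate 2 is independent of 0 and 1, so observing it
# (plus one of the correlated pair) leaves the least residual variance.
cov = np.array([[1.0, 0.9, 0.0],
                [0.9, 1.0, 0.0],
                [0.0, 0.0, 1.0]])
print(best_subset(cov, 2))  # → (0, 2)
```

Observing the correlated pair {0, 1} leaves the independent coordinate fully unexplained (MSE 1.0), while {0, 2} or {1, 2} leaves residual variance 1 − 0.9² = 0.19, which is why the search prefers them.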
Problem

Research questions and friction points this paper is trying to address.

Sequentially learning to estimate a Gaussian $K$-vector in the MSE sense while observing only $m < K$ entries per round
Designing adaptive and non-adaptive MSE estimators under bandit feedback
Identifying the MSE-optimal $m$-subset with high confidence via successive elimination
Innovation

Methods, ideas, or system contributions that make the work stand out.

Non-adaptive MSE estimator tied to a fixed $m$-subset, with concentration analysis
Adaptive MSE estimator via a regression framework, with sharper concentration bounds
Variant of the successive elimination algorithm for MSE-optimal subset identification
Minimax lower bound on the sample complexity of the problem
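The paper's exact elimination rule and confidence intervals are not given here. As a generic sketch, successive elimination treats each candidate $m$-subset as an arm, samples every active arm each round, and drops an arm once its confidence interval for the (estimated) MSE lies strictly above the best arm's interval. The Hoeffding-style radius below is a standard textbook choice that assumes bounded or sub-Gaussian samples, not the paper's variant.

```python
import math
import random

def successive_elimination(arms, pull, delta=0.05, max_rounds=2000):
    """Generic successive elimination over `arms` (a sketch, not the
    paper's algorithm). `pull(a)` returns a noisy sample of arm a's
    loss (here: its MSE); the arm with the smallest loss is sought."""
    means = {a: 0.0 for a in arms}
    counts = {a: 0 for a in arms}
    active = list(arms)
    for t in range(1, max_rounds + 1):
        for a in active:
            x = pull(a)
            counts[a] += 1
            means[a] += (x - means[a]) / counts[a]  # running average
        # Hoeffding-style confidence radius (standard choice; assumes
        # bounded/sub-Gaussian samples) shared by all active arms.
        rad = math.sqrt(math.log(4 * len(arms) * t * t / delta) / (2 * t))
        best = min(means[a] for a in active)
        # Keep an arm only if its lower bound overlaps the best upper bound.
        active = [a for a in active if means[a] - rad <= best + rad]
        if len(active) == 1:
            break
    return min(active, key=lambda a: means[a])
```

A usage example with hypothetical per-subset MSE values (the two near-optimal arms tie, so the loop runs out its budget and returns the empirically better of the two):

```python
random.seed(0)
true_mse = {(0, 1): 1.0, (0, 2): 0.19, (1, 2): 0.19}
pull = lambda a: true_mse[a] + random.gauss(0.0, 0.1)
print(successive_elimination(list(true_mse), pull))  # one of (0, 2), (1, 2)
```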
Ayon Ghosh
Department of Computer Science and Engineering, Indian Institute of Technology Madras, Chennai
L. A. Prashanth
Department of Computer Science and Engineering, Indian Institute of Technology Madras, Chennai
Dipayan Sen
Department of Computer Science and Engineering, Indian Institute of Technology Madras, Chennai
Aditya Gopalan
Department of Electrical Communication Engineering, Indian Institute of Science, Bangalore