🤖 AI Summary
This paper studies policy evaluation in reinforcement learning under adversarial reward corruption: when rewards are maliciously perturbed with probability ε according to the Huber contamination model, standard temporal-difference (TD) algorithms fail. To address this, we propose Robust-TD, a novel robust TD algorithm. Our contributions are threefold: (i) we establish, for the first time, finite-time upper bounds and information-theoretic minimax lower bounds for robust TD learning under Markov-dependent data; (ii) we prove that an O(ε) robust estimation error is unavoidable, so our upper bound is tight in ε; (iii) we analyze the Median-of-Means estimator, adapted to handle the temporal dependence of corrupted rewards, enabling provably robust estimation. Overall, Robust-TD matches the finite-time guarantees of vanilla TD with linear function approximation up to an additive O(ε) corruption term, making policy evaluation reliable in adversarial environments.
📝 Abstract
One of the most basic problems in reinforcement learning (RL) is policy evaluation: estimating the long-term return, i.e., value function, corresponding to a given fixed policy. The celebrated Temporal Difference (TD) learning algorithm addresses this problem, and recent work has investigated finite-time convergence guarantees for this algorithm and variants thereof. However, these guarantees hinge on the reward observations being always generated from a well-behaved (e.g., sub-Gaussian) true reward distribution. Motivated by harsh, real-world environments where such an idealistic assumption may no longer hold, we revisit the policy evaluation problem from the perspective of adversarial robustness. In particular, we consider a Huber-contaminated reward model where an adversary can arbitrarily corrupt each reward sample with a small probability $\epsilon$. Under this observation model, we first show that the adversary can cause the vanilla TD algorithm to converge to any arbitrary value function. We then develop a novel algorithm called Robust-TD and prove that its finite-time guarantees match those of vanilla TD with linear function approximation up to a small $O(\epsilon)$ term that captures the effect of corruption. We complement this result with a minimax lower bound, revealing that such an additive corruption-induced term is unavoidable. To our knowledge, these results are the first of their kind in the context of adversarial robustness of stochastic approximation schemes driven by Markov noise. The key new technical tool that enables our results is an analysis of the Median-of-Means estimator with corrupted, time-correlated data that might be of independent interest to the literature on robust statistics.
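To make the two key ingredients concrete, here is a minimal sketch of (a) Huber contamination of reward samples and (b) the classical Median-of-Means estimator. This is an illustrative simulation with i.i.d. data and made-up parameters, not the paper's Robust-TD algorithm (which handles Markov-dependent samples); all names and constants below are our own choices.

```python
import numpy as np

def median_of_means(samples, num_blocks):
    """Split samples into blocks, average each block, return the median of block means."""
    blocks = np.array_split(np.asarray(samples), num_blocks)
    return float(np.median([b.mean() for b in blocks]))

rng = np.random.default_rng(0)
eps, n = 0.05, 2000                    # contamination probability and sample size (illustrative)

clean = rng.normal(loc=1.0, scale=1.0, size=n)   # well-behaved rewards with true mean 1.0
corrupt = rng.random(n) < eps                    # each sample corrupted w.p. eps (Huber model)
rewards = np.where(corrupt, 1e6, clean)          # adversary injects arbitrary outliers

naive = rewards.mean()                           # sample mean is wrecked by the outliers
# Robustness needs strictly more than half of the blocks to stay clean,
# so pick num_blocks > 2 * eps * n (here 250 > 200).
robust = median_of_means(rewards, num_blocks=250)
```

With these settings the naive mean is dragged to roughly `eps * 1e6 ≈ 5e4`, while the median-of-means estimate stays within a small additive error of the true mean 1.0, matching the intuition behind the $O(\epsilon)$ term: corruption shifts the robust estimate, but only by a bounded amount.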