🤖 AI Summary
To address the estimation bias that conventional convex regularization introduces into feature selection for reinforcement learning, this paper proposes a batch policy evaluation method based on the nonconvex projected minimax concave (PMC) penalty. The method jointly minimizes the Bellman residual and enforces sparsity via the nonconvex regularizer, yielding a weakly convex optimization problem within the least-squares temporal difference (LSTD) framework. The authors design a forward-reflected-backward splitting (FRBS) algorithm to solve it and establish, for the first time, its convergence guarantee under a generalized nonmonotone-inclusion framework. Compared with existing approaches, the method achieves significantly improved feature selection accuracy and robustness in high-dimensional noisy settings, and attains state-of-the-art (SOTA) performance across multiple benchmark tasks.
📝 Abstract
This work proposes an efficient batch algorithm for feature selection in reinforcement learning (RL) with theoretical convergence guarantees. To mitigate the estimation bias inherent in conventional regularization schemes, the first contribution extends policy evaluation within the classical least-squares temporal-difference (LSTD) framework by formulating a Bellman-residual objective regularized with the sparsity-inducing, nonconvex projected minimax concave (PMC) penalty. Owing to the weak convexity of the PMC penalty, this formulation can be interpreted as a special instance of a general nonmonotone-inclusion problem. The second contribution establishes novel convergence conditions for the forward-reflected-backward splitting (FRBS) algorithm to solve this class of problems. Numerical experiments on benchmark datasets demonstrate that the proposed approach substantially outperforms state-of-the-art feature-selection methods, particularly in scenarios with many noisy features.
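To make the two ingredients concrete, the sketch below applies the FRBS iteration of Malitsky and Tam to a least-squares objective regularized by the plain minimax concave penalty, whose proximal map is the classical firm-thresholding operator. This is only an illustrative toy: the paper's projected variant of the penalty, the LSTD-specific Bellman-residual system, and its convergence conditions are not reproduced here, and the matrix `A`, vector `b`, and all hyperparameter values are assumptions chosen for the demo.

```python
import numpy as np

def mcp_prox(z, lam, gamma, t):
    """Proximal map of the (plain) minimax concave penalty with step t,
    i.e. firm thresholding. Requires gamma > t so the prox subproblem
    stays strongly convex despite the penalty's weak convexity."""
    return np.where(
        np.abs(z) <= gamma * lam,
        np.sign(z) * np.maximum(np.abs(z) - t * lam, 0.0) / (1.0 - t / gamma),
        z,  # beyond gamma*lam the penalty is flat, so the prox is the identity
    )

def frbs_mcp(A, b, lam=0.1, gamma=4.0, t=None, iters=500):
    """Forward-reflected-backward splitting for
        min_x 0.5 * ||A x - b||^2 + MCP_{lam,gamma}(x).
    In the paper, the smooth part would be an LSTD Bellman-residual
    objective; here A and b form a generic least-squares stand-in."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    if t is None:
        t = 0.25 / L                       # conservative step (< 1/(2L));
                                           # the weakly convex case needs
                                           # extra conditions (see the paper)
    x = np.zeros(A.shape[1])
    g_prev = A.T @ (A @ x - b)             # so the first step is forward-backward
    for _ in range(iters):
        g = A.T @ (A @ x - b)
        # reflected forward step: x - t*(2*grad_k - grad_{k-1}), then prox
        x = mcp_prox(x - t * (2.0 * g - g_prev), lam, gamma, t)
        g_prev = g
    return x
```

On a noiseless sparse-recovery instance, the unbiasedness of the MC penalty shows up directly: coefficients whose magnitude exceeds `gamma * lam` incur no shrinkage, unlike the uniform bias of an L1 penalty.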