Nonconvex Regularization for Feature Selection in Reinforcement Learning

📅 2025-09-19
📈 Citations: 0
Influential citations: 0
🤖 AI Summary
To address the estimation bias that conventional convex regularization introduces into feature selection for reinforcement learning, this paper proposes a batch policy evaluation method based on the nonconvex projected minimax concave (PMC) penalty. The method jointly minimizes the Bellman residual and enforces sparsity through the nonconvex regularizer, yielding a weakly convex optimization problem within the least-squares temporal difference (LSTD) framework. The authors solve it with a forward-reflected-backward splitting (FRBS) algorithm and establish, for the first time, its convergence guarantee under a generalized nonmonotone-inclusion framework. Compared with existing approaches, the method achieves markedly better feature selection accuracy and robustness in high-dimensional noisy settings and attains state-of-the-art performance across multiple benchmark tasks.
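Neither the summary nor the abstract spells out the optimization problem, so the following is a minimal sketch in standard LSTD notation, assuming feature matrices Φ and Φ′ for current and next states, a reward vector r, and discount factor γ; the separable scalar minimax concave penalty is written as a simplified stand-in for the paper's projected MC (PMC) construction.

```latex
% Assumed empirical LSTD quantities:
%   \hat{A} = \Phi^{\top}(\Phi - \gamma \Phi'), \qquad \hat{b} = \Phi^{\top} r
% One plausible form of the penalized Bellman-residual objective:
\min_{w \in \mathbb{R}^{d}} \;
  \tfrac{1}{2} \bigl\| \hat{A} w - \hat{b} \bigr\|_{2}^{2}
  + \sum_{i=1}^{d} \psi_{\lambda,\rho}(w_{i}),
\qquad
\psi_{\lambda,\rho}(x) =
\begin{cases}
  \lambda \lvert x \rvert - \dfrac{x^{2}}{2\rho}, & \lvert x \rvert \le \rho\lambda,\\[6pt]
  \dfrac{\rho\lambda^{2}}{2}, & \lvert x \rvert > \rho\lambda .
\end{cases}
```

Because ψ is 1/ρ-weakly convex, the overall objective is weakly convex, which is what places the problem in the nonmonotone-inclusion class the paper analyzes.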

📝 Abstract
This work proposes an efficient batch algorithm for feature selection in reinforcement learning (RL) with theoretical convergence guarantees. To mitigate the estimation bias inherent in conventional regularization schemes, the first contribution extends policy evaluation within the classical least-squares temporal-difference (LSTD) framework by formulating a Bellman-residual objective regularized with the sparsity-inducing, nonconvex projected minimax concave (PMC) penalty. Owing to the weak convexity of the PMC penalty, this formulation can be interpreted as a special instance of a general nonmonotone-inclusion problem. The second contribution establishes novel convergence conditions for the forward-reflected-backward splitting (FRBS) algorithm to solve this class of problems. Numerical experiments on benchmark datasets demonstrate that the proposed approach substantially outperforms state-of-the-art feature-selection methods, particularly in scenarios with many noisy features.
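As a concrete illustration of the kind of solver the abstract describes, here is a minimal Python sketch of a forward-reflected-backward splitting (FRBS) iteration applied to the MC-penalized least-squares surrogate above. The names (`prox_mcp`, `frbs_lstd`), the firm-thresholding proximal map, and the step-size rule (borrowed from the standard monotone FRBS analysis) are illustrative assumptions; they are not the paper's PMC penalty or its convergence conditions for the weakly convex setting.

```python
import numpy as np

def prox_mcp(v, t, lam, rho):
    """Proximal map of the separable scalar minimax concave penalty
    (firm thresholding); well defined when the step size t < rho.
    Used here as a stand-in for the paper's projected MC (PMC) penalty."""
    shrunk = np.sign(v) * np.maximum(np.abs(v) - t * lam, 0.0) / (1.0 - t / rho)
    return np.where(np.abs(v) <= rho * lam, shrunk, v)

def frbs_lstd(A, b, lam=0.1, rho=5.0, n_iter=500):
    """FRBS iteration for  min_w 0.5*||A w - b||^2 + sum_i psi_{lam,rho}(w_i).

    The step size follows the usual monotone FRBS rule (t < 1/(2L));
    the paper derives its own conditions for the weakly convex case."""
    L = np.linalg.norm(A.T @ A, 2)       # Lipschitz constant of the gradient
    t = 0.45 / L                         # conservative step size
    grad = lambda w: A.T @ (A @ w - b)   # gradient of the quadratic term
    w = np.zeros(A.shape[1])
    g_prev = grad(w)                     # initialize with w_{-1} = w_0
    for _ in range(n_iter):
        g = grad(w)
        # reflected forward step followed by the (nonconvex) proximal step
        w = prox_mcp(w - t * (2.0 * g - g_prev), t, lam, rho)
        g_prev = g
    return w
```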
Problem

Research questions and friction points this paper is trying to address.

Addresses estimation bias introduced by convex regularization in feature selection for RL
Proposes nonconvex (PMC) regularization for sparse policy evaluation
Solves the resulting nonmonotone-inclusion problem with convergence guarantees
Innovation

Methods, ideas, or system contributions that make the work stand out.

Nonconvex PMC penalty that reduces the estimation bias of convex regularizers
FRBS algorithm with new convergence guarantees for the weakly convex setting
Sparsity-regularized Bellman-residual objective within the LSTD framework (a usage sketch follows below)
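For completeness, a hypothetical end-to-end run of the `frbs_lstd` sketch above on synthetic data with many noise features, illustrating the sparse feature-selection behavior these bullets refer to; the dimensions, parameters, and data-generation scheme are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, k = 200, 1000, 5                        # features, samples, true nonzeros
w_true = np.zeros(d)
w_true[:k] = rng.normal(size=k)               # only the first k features are relevant
Phi = rng.normal(size=(n, d))                 # current-state features
Phi_next = rng.normal(size=(n, d))            # next-state features
gamma = 0.9
r = (Phi - gamma * Phi_next) @ w_true + 0.1 * rng.normal(size=n)

A_hat = Phi.T @ (Phi - gamma * Phi_next) / n  # empirical LSTD matrix (one common convention)
b_hat = Phi.T @ r / n

w_hat = frbs_lstd(A_hat, b_hat, lam=0.05, rho=5.0, n_iter=2000)
selected = np.flatnonzero(np.abs(w_hat) > 1e-6)
print("selected features:", selected)         # ideally close to {0, ..., k-1}
```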
Kyohei Suzuki
Institute of Science Tokyo, Japan, Department of Information and Communications Engineering
Konstantinos Slavakis
Institute of Science Tokyo (ex TokyoTech), Department of Information and Communications Engineering
Signal processing · Machine learning