Hybrid TD3: Overestimation Bias Analysis and Stable Policy Optimization for Hybrid Action Space

📅 2026-03-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work proposes Hybrid TD3, a novel reinforcement learning algorithm designed to address overestimation bias and training instability in discrete-continuous hybrid action spaces. Building upon the TD3 framework, the study provides the first formal analysis of Q-value overestimation bias in such hybrid settings and establishes an ordering of bias magnitudes across five algorithmic variants. To mitigate these issues, Hybrid TD3 introduces a weighted-clipped Q-learning objective combined with a marginalization mechanism over the discrete action distribution, enabling joint optimization of high-level decision-making and low-level control. Experimental results across multiple robotic manipulation tasks demonstrate that Hybrid TD3 significantly enhances training stability and achieves superior performance compared to existing hybrid-action baselines, particularly in high-dimensional action spaces and under domain randomization.

📝 Abstract
Reinforcement learning in discrete-continuous hybrid action spaces presents fundamental challenges for robotic manipulation, where high-level task decisions and low-level joint-space execution must be jointly optimized. Existing approaches either discretize continuous components or relax discrete choices into continuous approximations, and both suffer from scalability limitations and training instability in high-dimensional action spaces and under domain randomization. In this paper, we propose Hybrid TD3, an extension of Twin Delayed Deep Deterministic Policy Gradient (TD3) that natively handles parameterized hybrid action spaces in a principled manner. We conduct a rigorous theoretical analysis of overestimation bias in hybrid action settings, deriving formal bounds under twin-critic architectures and establishing a complete bias ordering across five algorithmic variants. Building on this analysis, we introduce a weighted clipped Q-learning target that marginalizes over the discrete action distribution, achieving bias reduction equivalent to standard clipped minimization while improving policy smoothness. Experimental results demonstrate that Hybrid TD3 achieves superior training stability and competitive performance against state-of-the-art hybrid-action baselines.
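The abstract describes a target that takes the element-wise minimum of twin critics and then marginalizes over the discrete action distribution instead of committing to a single discrete choice. The sketch below is one plausible reading of that mechanism, not the paper's implementation: the function name, array shapes, and the assumption that each of the K discrete actions is scored with its own continuous parameter from the target policy are all illustrative.

```python
import numpy as np

def weighted_clipped_q_target(q1, q2, pi_disc, rewards, dones, gamma=0.99):
    """Hypothetical weighted clipped Q-learning target for a hybrid action space.

    q1, q2   : (batch, K) target-critic values, one column per discrete action,
               each evaluated with its associated continuous parameter.
    pi_disc  : (batch, K) probabilities of the target policy's discrete head.
    rewards  : (batch,) immediate rewards.
    dones    : (batch,) terminal flags (1.0 if episode ended, else 0.0).
    """
    # Clipped double-Q: take the per-action minimum of the twin critics.
    clipped = np.minimum(q1, q2)
    # Marginalize over the discrete distribution rather than picking argmax,
    # which is the "weighted" part suggested by the abstract.
    expected = (pi_disc * clipped).sum(axis=1)
    # Standard bootstrapped TD target.
    return rewards + gamma * (1.0 - dones) * expected
```

Averaging under the discrete policy rather than maximizing keeps the target a smooth function of the discrete-head probabilities, which is consistent with the policy-smoothness claim, while the element-wise minimum retains the overestimation-bias control of clipped double Q-learning.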
Problem

Research questions and friction points this paper is trying to address.

hybrid action space
reinforcement learning
overestimation bias
training instability
robotic manipulation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hybrid Action Space
Overestimation Bias
Twin-Critic Architecture
Weighted Clipped Q-learning
Policy Smoothness
Thanh-Tuan Tran
University of Engineering and Technology, Vietnam National University, 10000, Hanoi, Vietnam
Thanh Nguyen Canh
School of Information Science, Japan Advanced Institute of Science and Technology, Nomi, 923-1211, Ishikawa, Japan
Nak Young Chong
Professor of Information Science, JAIST
Xiem HoangVan
University of Engineering and Technology, Vietnam National University, 10000, Hanoi, Vietnam