🤖 AI Summary
To address the challenges of high-dimensional continuous action spaces, sparse rewards, and temporal dynamics in robotic reinforcement learning, this paper extends Group Relative Policy Optimization (GRPO)—previously restricted to discrete actions—to continuous control. We propose a trajectory-clustering-based policy grouping mechanism, a state-aware group advantage estimation method, and a KL-regularized continuous policy update framework. We theoretically establish the convergence of the algorithm in continuous action spaces and prove that its computational complexity is polynomial. This work provides the first complete theoretical foundation for GRPO in continuous control, bridging a critical gap between group-based policy optimization and real-world embodied AI tasks. The resulting framework offers a scalable and robust policy optimization paradigm for applications such as legged locomotion and dexterous manipulation.
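The core idea behind GRPO-style group advantage estimation can be sketched briefly: instead of training a value-function critic as a baseline, each sampled trajectory's return is standardized against the statistics of its own group. The snippet below is a minimal illustration of that mechanism, not the paper's implementation; the array layout and `eps` stabilizer are assumptions.

```python
import numpy as np

def group_relative_advantages(returns: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Standardize each trajectory's return within its group.

    `returns` has shape (num_groups, group_size): row g holds the returns
    of the trajectories sampled for group g (layout assumed for this
    sketch). Standardizing within the group replaces the learned
    value-function baseline, which is the key simplification GRPO offers.
    """
    mean = returns.mean(axis=1, keepdims=True)
    std = returns.std(axis=1, keepdims=True)
    return (returns - mean) / (std + eps)

# Example: two groups of four sampled trajectories each.
adv = group_relative_advantages(np.array([[1.0, 2.0, 3.0, 4.0],
                                          [0.0, 0.0, 1.0, 1.0]]))
```

In a continuous-control setting, the paper's trajectory-clustering mechanism would determine which trajectories share a group; here the grouping is simply given by the rows.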
📝 Abstract
Group Relative Policy Optimization (GRPO) has shown promise in discrete action spaces by eliminating value function dependencies through group-based advantage estimation. However, its application to continuous control remains unexplored, limiting its utility in robotics where continuous actions are essential. This paper presents a theoretical framework extending GRPO to continuous control environments, addressing challenges in high-dimensional action spaces, sparse rewards, and temporal dynamics. Our approach introduces trajectory-based policy clustering, state-aware advantage estimation, and regularized policy updates designed for robotic applications. We provide theoretical analysis of convergence properties and computational complexity, establishing a foundation for future empirical validation in robotic systems including locomotion and manipulation tasks.
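The regularized policy update described above can be made concrete with a small sketch: for diagonal Gaussian policies (the standard parameterization in continuous control), the KL divergence to a reference policy has a closed form, and the per-sample objective combines an importance-weighted group advantage with that KL penalty. This is an illustrative sketch under assumed conventions, not the paper's update rule; the coefficient `beta` is a hypothetical hyperparameter.

```python
import numpy as np

def diag_gaussian_kl(mu_p: np.ndarray, sigma_p: np.ndarray,
                     mu_q: np.ndarray, sigma_q: np.ndarray) -> float:
    """Closed-form KL(p || q) between diagonal Gaussian policies,
    summed over action dimensions."""
    return float(np.sum(
        np.log(sigma_q / sigma_p)
        + (sigma_p**2 + (mu_p - mu_q)**2) / (2.0 * sigma_q**2)
        - 0.5
    ))

def kl_regularized_objective(log_ratio: float, advantage: float,
                             kl: float, beta: float = 0.04) -> float:
    """Per-sample surrogate: the importance ratio exp(log_ratio) weights
    the group-relative advantage, and the KL term penalizes drift from
    the reference policy (beta is an assumed tuning coefficient)."""
    return float(np.exp(log_ratio)) * advantage - beta * kl
```

Maximizing this objective pushes the policy toward higher-advantage actions while the KL term keeps the update close to the reference policy, which is what makes the update "regularized" in the sense the abstract describes.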