Extending Group Relative Policy Optimization to Continuous Control: A Theoretical Framework for Robotic Reinforcement Learning

📅 2025-07-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the challenges of high-dimensional continuous action spaces, sparse rewards, and modeling temporal dynamics in robotic reinforcement learning, this paper extends Group Relative Policy Optimization (GRPO)—previously restricted to discrete actions—to continuous control. We propose a trajectory-clustering-based policy grouping mechanism, a state-aware group advantage estimation method, and a KL-regularized continuous policy update framework. We theoretically establish the convergence of the algorithm in continuous action spaces and prove its polynomial computational complexity. This work provides the first complete theoretical foundation for GRPO in continuous control, bridging a critical gap between group-based policy optimization and real-world embodied AI tasks. The resulting framework offers a scalable and robust policy optimization paradigm for applications such as legged locomotion and dexterous manipulation.
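The paper is purely theoretical, but the two core ideas in the summary, group-relative advantage estimation and a KL-regularized policy update, can be illustrated with a minimal sketch. All function names below are hypothetical (not from the paper); the sketch assumes a diagonal Gaussian policy and normalizes returns within a sampled group instead of using a learned value function:

```python
import numpy as np

def group_relative_advantages(returns):
    """GRPO-style advantage: normalize returns within a sampled group,
    removing the need for a learned value-function baseline."""
    r = np.asarray(returns, dtype=float)
    return (r - r.mean()) / (r.std() + 1e-8)

def gaussian_kl(mu_new, sigma_new, mu_old, sigma_old):
    """KL(new || old) between diagonal Gaussian policies, summed over
    action dimensions; used as the regularizer in the policy update."""
    return np.sum(
        np.log(sigma_old / sigma_new)
        + (sigma_new**2 + (mu_new - mu_old) ** 2) / (2 * sigma_old**2)
        - 0.5
    )

# Toy group of four sampled trajectories with scalar returns:
adv = group_relative_advantages([1.0, 2.0, 3.0, 4.0])
print(adv)  # zero-centered, higher-return trajectories get positive advantage
```

An unchanged policy gives `gaussian_kl(mu, sigma, mu, sigma) == 0`, so the regularizer only penalizes updates that move the continuous policy away from the old one.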

📝 Abstract
Group Relative Policy Optimization (GRPO) has shown promise in discrete action spaces by eliminating value function dependencies through group-based advantage estimation. However, its application to continuous control remains unexplored, limiting its utility in robotics where continuous actions are essential. This paper presents a theoretical framework extending GRPO to continuous control environments, addressing challenges in high-dimensional action spaces, sparse rewards, and temporal dynamics. Our approach introduces trajectory-based policy clustering, state-aware advantage estimation, and regularized policy updates designed for robotic applications. We provide theoretical analysis of convergence properties and computational complexity, establishing a foundation for future empirical validation in robotic systems including locomotion and manipulation tasks.
Problem

Research questions and friction points this paper is trying to address.

Extend GRPO to continuous control for robotics
Address high-dimensional action spaces and sparse rewards
Develop trajectory clustering and state-aware advantage estimation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Extends GRPO to continuous control environments
Introduces trajectory-based policy clustering
Uses state-aware advantage estimation
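The trajectory-based policy clustering is only described abstractly in the abstract and summary. As one hypothetical reading (not the paper's actual algorithm), trajectories could be grouped by simple summary features, such as total return and episode length, with plain k-means, and advantages then computed within each cluster:

```python
import numpy as np

def kmeans(features, k, iters=50, seed=0):
    """Minimal k-means: returns a cluster label for each trajectory,
    where each row of `features` summarizes one trajectory."""
    rng = np.random.default_rng(seed)
    centers = features[rng.choice(len(features), size=k, replace=False)]
    for _ in range(iters):
        # Assign each trajectory to its nearest cluster center.
        dists = np.linalg.norm(features[:, None] - centers[None], axis=2)
        labels = dists.argmin(axis=1)
        # Recompute each center as the mean of its assigned trajectories.
        for j in range(k):
            if (labels == j).any():
                centers[j] = features[labels == j].mean(axis=0)
    return labels

# Toy trajectory features [total return, episode length]: two obvious groups.
feats = np.array([[1.0, 10.0], [1.2, 11.0], [9.0, 50.0], [9.5, 52.0]])
labels = kmeans(feats, k=2)
print(labels)  # the two short/low-return trajectories share one label
```

Grouping before advantage estimation keeps comparisons among trajectories that visited similar state regions, which is the intuition behind the paper's state-aware group advantage.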
Authors
Rajat Khanda (University of Houston, Houston, TX, USA)
Mohammad Baqar (Software Engineer at Cisco Systems Inc.)
Sambuddha Chakrabarti (Princeton University, New Jersey, USA)
Satyasaran Changdar (University of Copenhagen, Copenhagen, Denmark)