🤖 AI Summary
In high-degree-of-freedom robotic skill learning, tightly coupled multi-constraint reward functions impede efficient policy optimization and limit performance. To address this, we propose an LLM-driven dynamic hybrid reward scheduling framework. Our method leverages large language models to generate semantic weighting rules and to select among them via language prompts, while a multi-branch value network enables online, adaptive assessment of each reward component's importance, facilitating progressive, structured skill acquisition. Unlike conventional static weighting or monolithic optimization paradigms, our framework is the first to integrate LLMs into the reward scheduling feedback loop, enabling semantic-aware, time-varying reward weighting. Evaluated across multiple complex robotic control tasks, it achieves an average performance improvement of 6.48%, alongside significantly enhanced training stability and policy generalization.
📝 Abstract
Enabling a high-degree-of-freedom robot to learn specific skills is a challenging task due to the complexity of robotic dynamics. Reinforcement learning (RL) has emerged as a promising solution; however, such problems require designing multiple reward functions to account for the various constraints on robotic motion. Existing approaches typically sum all reward components indiscriminately when optimizing the RL value function and policy. We argue that this uniform inclusion of all reward components in policy optimization is inefficient and limits the robot's learning performance. To address this, we propose an Automated Hybrid Reward Scheduling (AHRS) framework based on Large Language Models (LLMs). This paradigm dynamically adjusts the learning intensity of each reward component throughout the policy optimization process, enabling robots to acquire skills in a gradual and structured manner. Specifically, we design a multi-branch value network in which each branch corresponds to a distinct reward component. During policy optimization, each branch is assigned a weight reflecting its importance, and these weights are computed automatically from rules designed by LLMs. The LLM generates a rule library in advance from the task description; during training, it selects a weight-calculation rule from this library based on language prompts that evaluate the performance of each branch. Experimental results demonstrate that AHRS achieves an average 6.48% performance improvement across multiple high-degree-of-freedom robotic tasks.
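For intuition, the sketch below shows one plausible way to realize the multi-branch value network and rule-based weighting described in the abstract: a shared trunk with one value head per reward component, plus per-branch advantages combined using weights drawn from a small rule library. All names here (`MultiBranchValueNet`, `rule_library`, `weighted_advantage`) and the two example rules are illustrative assumptions, not the paper's implementation; in AHRS the rules are generated by an LLM from the task description and selected during training via language prompts.

```python
# Hedged sketch of a multi-branch value network with rule-based reward
# weighting. Names and rules are assumptions for illustration only.
import torch
import torch.nn as nn


class MultiBranchValueNet(nn.Module):
    """Shared trunk with one scalar value head per reward component."""

    def __init__(self, obs_dim: int, num_components: int, hidden: int = 256):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
        )
        # One value estimate per reward component.
        self.heads = nn.ModuleList(
            nn.Linear(hidden, 1) for _ in range(num_components)
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        h = self.trunk(obs)
        # Returns shape (batch, num_components).
        return torch.cat([head(h) for head in self.heads], dim=-1)


# Hypothetical rule library; in AHRS such rules are generated by the LLM,
# which then picks one at training time based on language prompts that
# summarize each branch's recent performance.
rule_library = {
    # Emphasize components whose branches are currently performing worst.
    "focus_on_lagging": lambda scores: torch.softmax(-scores, dim=0),
    # Uniform weighting as a fallback (equivalent to plain reward summing).
    "uniform": lambda scores: torch.full_like(scores, 1.0 / scores.numel()),
}


def weighted_advantage(per_branch_adv: torch.Tensor,
                       branch_scores: torch.Tensor,
                       rule_name: str) -> torch.Tensor:
    """Combine per-component advantages using rule-derived weights.

    per_branch_adv: (batch, num_components) advantages, one per reward branch.
    branch_scores:  (num_components,) recent performance of each branch,
                    e.g. its average per-component return (an assumption).
    """
    weights = rule_library[rule_name](branch_scores)  # (num_components,)
    return per_branch_adv @ weights  # (batch,) scalar advantage for the policy loss
```

In a standard actor-critic loop, the scalar output of `weighted_advantage` would simply replace the usual single-critic advantage in the policy loss (e.g., PPO), so the weighting rule, rather than an unweighted sum, determines how strongly each reward component drives the current policy update.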