🤖 AI Summary
Reward design, policy optimization, and sim-to-real transfer in bipedal robot reinforcement learning still rely heavily on manual intervention. Method: This paper proposes the first LLM-driven end-to-end closed-loop framework, in which a large language model autonomously guides reward function generation, policy training (via PPO/SAC), simulation evaluation, and real-world deployment. It introduces a sim-to-real homomorphic evaluation module that enables autonomous policy iteration and cross-domain transfer, combined with homomorphic mapping and historical policy distillation to support zero-shot task generalization and adaptive gait optimization. Contribution/Results: Evaluated in MuJoCo and Isaac Gym simulations and on multiple physical bipedal platforms, the framework achieves a 42% improvement in deployment success rate and reduces manual hyperparameter tuning time by 86%.
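To make the "homomorphic evaluation" idea concrete, the sketch below scores sim-to-real agreement by mapping rollouts from both domains through a shared feature map and comparing them. This is only an illustration of the general concept: the function names, the trajectory format, and the particular gait features are assumptions, not the paper's actual definitions.

```python
import math

def gait_features(trajectory):
    """Map a rollout (a list of (time, torso_height, forward_velocity)
    tuples) to summary features expected to match across domains.
    This feature choice is a hypothetical stand-in."""
    heights = [h for _, h, _ in trajectory]
    vels = [v for _, _, v in trajectory]
    return (sum(heights) / len(heights), sum(vels) / len(vels))

def transfer_score(sim_traj, real_traj):
    """Return a sim-to-real consistency score in (0, 1]; higher means
    the simulated and real rollouts look alike under the feature map."""
    fs, fr = gait_features(sim_traj), gait_features(real_traj)
    dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(fs, fr)))
    return math.exp(-dist)
```

A score near 1 would indicate that a policy behaves the same in simulation and on hardware, which is the signal such a module could feed back into the policy-iteration loop.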
📝 Abstract
Training and deploying reinforcement learning (RL) policies for robots, especially for accomplishing specific tasks, presents substantial challenges. Recent advancements have explored diverse reward function designs, training techniques, simulation-to-reality (sim-to-real) transfer, and performance analysis methodologies, yet these still require significant human intervention. This paper introduces an end-to-end framework for training and deploying RL policies, guided by Large Language Models (LLMs), and evaluates its effectiveness on bipedal robots. The framework consists of three interconnected modules: an LLM-guided reward function design module, an RL training module leveraging prior work, and a sim-to-real homomorphic evaluation module. This design significantly reduces the need for human input by requiring only essential simulation and deployment platforms, with the option to incorporate human-engineered strategies and historical data. We detail the construction of these modules, their advantages over traditional approaches, and demonstrate the framework's capability to autonomously develop and refine control strategies for bipedal robot locomotion, showcasing its potential to operate independently of human intervention.
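The three-module loop described in the abstract can be sketched as a simple iterate-propose-evaluate cycle. Everything below is a hypothetical stand-in: `propose_reward_weights` stands in for the LLM reward-design module (here a random perturbation of the best weights so far, where an LLM would reason over the history), and `train_and_evaluate` stands in for PPO/SAC training plus the homomorphic evaluation module (here a toy scoring function).

```python
import random

def propose_reward_weights(history):
    """Stand-in for the LLM reward-design module: propose new reward
    weights conditioned on the history of (weights, score) pairs."""
    if not history:
        return {"velocity": 1.0, "energy": -0.1, "upright": 0.5}
    best_w, _ = max(history, key=lambda h: h[1])
    # Perturb the best weights so far; an LLM would instead reason
    # over the full history and training diagnostics.
    return {k: v + random.uniform(-0.1, 0.1) for k, v in best_w.items()}

def train_and_evaluate(weights):
    """Stand-in for RL training plus sim-to-real evaluation: return a
    scalar deployment-quality score for the resulting policy."""
    # Toy objective: peak score at these (arbitrary) target weights.
    target = {"velocity": 1.2, "energy": -0.05, "upright": 0.6}
    return -sum((weights[k] - target[k]) ** 2 for k in target)

def closed_loop(iterations=20, seed=0):
    """Run the autonomous iteration loop and return the best
    (weights, score) pair found."""
    random.seed(seed)
    history = []
    for _ in range(iterations):
        w = propose_reward_weights(history)
        score = train_and_evaluate(w)
        history.append((w, score))
    return max(history, key=lambda h: h[1])

best_weights, best_score = closed_loop()
```

The point of the sketch is the control flow, not the components: because proposal, training, and evaluation are all automated, the loop can iterate without a human adjusting rewards or hyperparameters between runs.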