RL-augmented Adaptive Model Predictive Control for Bipedal Locomotion over Challenging Terrain

📅 2025-09-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the limited robustness of model predictive control (MPC) for bipedal robots on uneven, slippery, and otherwise complex terrain, and the challenges of constraint satisfaction, reward engineering, and sample inefficiency in reinforcement learning (RL), this paper proposes an adaptive RL-MPC hybrid control framework. The method integrates deep RL into three core MPC components: system dynamics modeling, swing-leg trajectory generation, and gait frequency adaptation, while enforcing hard constraints on states and inputs. An efficient inner-loop MPC is built on single-rigid-body dynamics, and end-to-end policy optimization is performed in NVIDIA IsaacLab's high-fidelity simulation environment. Experimental results show that the proposed approach significantly improves stability, obstacle negotiation, and cross-terrain generalization over both conventional MPC and pure RL baselines, particularly on stairs, stepping stones, and low-friction surfaces.

📝 Abstract
Model predictive control (MPC) has demonstrated effectiveness for humanoid bipedal locomotion; however, its applicability in challenging environments, such as rough and slippery terrain, is limited by the difficulty of modeling terrain interactions. In contrast, reinforcement learning (RL) has achieved notable success in training robust locomotion policies over diverse terrain, yet it lacks guarantees of constraint satisfaction and often requires substantial reward shaping. Recent efforts in combining MPC and RL have shown promise of taking the best of both worlds, but they are primarily restricted to flat terrain or quadrupedal robots. In this work, we propose an RL-augmented MPC framework tailored for bipedal locomotion over rough and slippery terrain. Our method parametrizes three key components of single-rigid-body-dynamics-based MPC: system dynamics, swing leg controller, and gait frequency. We validate our approach through bipedal robot simulations in NVIDIA IsaacLab across various terrains, including stairs, stepping stones, and low-friction surfaces. Experimental results demonstrate that our RL-augmented MPC framework produces significantly more adaptive and robust behaviors compared to baseline MPC and RL.
Problem

Research questions and friction points this paper is trying to address.

Modeling terrain interactions for bipedal locomotion on challenging surfaces
Ensuring constraint satisfaction while maintaining robust locomotion performance
Combining MPC and RL advantages specifically for bipedal robots on rough terrain
Innovation

Methods, ideas, or system contributions that make the work stand out.

RL-augmented MPC for bipedal locomotion
Parametrizes system dynamics, the swing-leg controller, and gait frequency
Validated on stairs, stepping stones, slippery surfaces
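The parametrization listed above can be sketched in miniature: an RL policy emits an unbounded action vector, which is squashed into bounded corrections applied on top of nominal MPC parameters. This is an illustrative assumption, not the authors' implementation; the parameter names (`inertia_scale`, `foothold_offset`, `gait_frequency`), the bounds, and the tanh squashing are all hypothetical.

```python
import math
from dataclasses import dataclass


@dataclass
class MPCParams:
    """Hypothetical knobs an RL policy could expose on an SRB-based MPC."""
    inertia_scale: float          # multiplicative correction to SRB inertia
    foothold_offset: tuple        # (dx, dy) swing-leg touchdown offset, meters
    gait_frequency: float         # stepping frequency, Hz


# Illustrative nominal values for the inner-loop MPC.
NOMINAL = MPCParams(inertia_scale=1.0,
                    foothold_offset=(0.0, 0.0),
                    gait_frequency=1.75)


def squash(x: float, lo: float, hi: float) -> float:
    """Map an unbounded policy output into [lo, hi] via tanh,
    so hard input bounds hold regardless of the raw action."""
    return lo + (hi - lo) * 0.5 * (math.tanh(x) + 1.0)


def apply_policy_action(action) -> MPCParams:
    """Turn a raw 4-D policy action into bounded MPC parameters.

    action = (a_inertia, a_dx, a_dy, a_freq); the ranges below are
    made-up examples, not values from the paper.
    """
    return MPCParams(
        inertia_scale=squash(action[0], 0.8, 1.2),
        foothold_offset=(squash(action[1], -0.1, 0.1),
                         squash(action[2], -0.1, 0.1)),
        gait_frequency=squash(action[3], 1.0, 2.5),
    )
```

A zero action recovers the midpoint of each range, so an untrained policy starts near the nominal controller; the bounded squashing is one simple way to keep RL outputs inside the hard constraints that MPC assumes.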
Junnosuke Kamohara
Georgia Institute of Technology, GA 30332, USA
Feiyang Wu
Georgia Institute of Technology
Chinmayee Wamorkar
Georgia Institute of Technology, GA 30332, USA
Seth Hutchinson
Northeastern University, 360 Huntington Ave, Boston, MA 02115, USA
Ye Zhao
Georgia Institute of Technology, GA 30332, USA