PIP-Loco: A Proprioceptive Infinite Horizon Planning Framework for Quadrupedal Robot Locomotion

📅 2024-09-14
🏛️ arXiv.org
📈 Citations: 2
Influential: 0
🤖 AI Summary
To address the challenge of achieving robust, constraint-satisfying, and adaptive long-horizon locomotion control for quadrupedal robots on dynamic and complex terrains, this paper proposes a proprioceptive, infinite-horizon Model Predictive Control (MPC) framework. Methodologically, it pairs an interpretable internal model with a Dreamer-style world model, introducing a co-dependent joint training scheme in which a velocity estimator and the Dreamer module are optimized alongside the policy, so that the policy and the internal dynamics model improve together. The key innovation lies in tightly coupling infinite-horizon MPC with end-to-end reinforcement learning, balancing safety, interpretability, and emergent locomotion capabilities. Evaluations in multi-terrain simulation and on a real quadrupedal robot platform demonstrate substantial improvements in locomotion robustness and cross-terrain generalization, and ablation studies confirm that each core component contributes to noise robustness and generalization performance.
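To make the co-dependent training idea concrete, a minimal sketch of a joint objective is given below. The velocity-estimation term and the Dreamer reconstruction term are summed with the policy loss so that one gradient step updates all three components together. All names, weights, and loss forms here are illustrative assumptions, not taken from the PIP-Loco code.

```python
import numpy as np

# Hypothetical illustration of the co-dependent training signal: the velocity
# estimator and the Dreamer module are optimized jointly with the policy, so a
# single scalar objective combines all three terms. The weights `w_vel` and
# `w_dream` are assumed hyperparameters.

def joint_loss(policy_loss, vel_pred, vel_true, recon_pred, obs_true,
               w_vel=1.0, w_dream=1.0):
    vel_loss = float(np.mean((vel_pred - vel_true) ** 2))      # estimator MSE
    dream_loss = float(np.mean((recon_pred - obs_true) ** 2))  # world-model MSE
    return policy_loss + w_vel * vel_loss + w_dream * dream_loss
```

Because the estimator and world-model terms share the same backward pass as the policy loss, errors in the internal model directly shape policy exploration, which is the co-dependence the summary describes.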

📝 Abstract
A core strength of Model Predictive Control (MPC) for quadrupedal locomotion has been its ability to enforce constraints and provide interpretability of the sequence of commands over the horizon. However, despite being able to plan, MPC struggles to scale with task complexity, often failing to achieve robust behavior on rapidly changing surfaces. On the other hand, model-free Reinforcement Learning (RL) methods have outperformed MPC on multiple terrains, showing emergent motions but inherently lack any ability to handle constraints or perform planning. To address these limitations, we propose a framework that integrates proprioceptive planning with RL, allowing for agile and safe locomotion behaviors through the horizon. Inspired by MPC, we incorporate an internal model that includes a velocity estimator and a Dreamer module. During training, the framework learns an expert policy and an internal model that are co-dependent, facilitating exploration for improved locomotion behaviors. During deployment, the Dreamer module solves an infinite-horizon MPC problem, adapting actions and velocity commands to respect the constraints. We validate the robustness of our training framework through ablation studies on internal model components and demonstrate improved robustness to training noise. Finally, we evaluate our approach across multi-terrain scenarios in both simulation and hardware.
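The deployment-time planning the abstract describes, where the Dreamer module solves an infinite-horizon MPC problem by adapting actions to respect constraints, can be sketched as a sampling-based search over imagined latent rollouts. Everything below is a simplified illustration under assumed interfaces (`dynamics`, `reward`, `value`, `constraint_cost` stand in for learned networks); it is not the paper's implementation, and holding one candidate action over the whole horizon is a simplification of planning over action sequences.

```python
import numpy as np

def plan(latent, policy_action, dynamics, reward, value, constraint_cost,
         horizon=5, samples=64, noise=0.1, rng=None):
    """Pick the candidate action whose imagined rollout best trades reward
    against constraint violations, with a learned value function
    bootstrapping the infinite-horizon tail."""
    rng = np.random.default_rng(rng)
    # Perturb the expert policy's action to get candidate first actions.
    candidates = policy_action + noise * rng.standard_normal(
        (samples, policy_action.shape[0]))
    scores = np.zeros(samples)
    for i, a in enumerate(candidates):
        z = latent
        ret = 0.0
        for _ in range(horizon):
            z = dynamics(z, a)                      # imagined next latent
            ret += reward(z) - constraint_cost(z)   # penalize violations
        ret += value(z)  # value approximates the return beyond the horizon
        scores[i] = ret
    return candidates[np.argmax(scores)]
```

The value bootstrap at the final imagined latent is what turns a short rollout into an infinite-horizon objective, mirroring how Dreamer-style planners avoid unrolling the model indefinitely.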
Problem

Research questions and friction points this paper is trying to address.

Enhance quadrupedal locomotion robustness on changing terrains
Integrate proprioceptive planning with reinforcement learning
Ensure safe locomotion by respecting constraints
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates proprioceptive planning with RL
Uses internal model with velocity estimator
Solves infinite-horizon MPC via Dreamer module
Aditya Shirwatkar
PhD Student, IISc Bangalore
Robotics · Robot Learning · Legged Locomotion
Naman Saxena
Robert Bosch Center for Cyber-Physical Systems, Indian Institute of Science, Bengaluru
Kishore Chandra
Robert Bosch Center for Cyber-Physical Systems, Indian Institute of Science, Bengaluru
Shishir N. Y. Kolathaya
Assistant Professor, Cyber Physical Systems, Computer Science & Automation, IISc
Robotics · Nonlinear control · Machine learning · Hybrid systems