🤖 AI Summary
This work addresses the limited generalization across diverse gaits in legged robot locomotion policies, which we attribute to inadequate goal representation. We propose a unified policy framework conditioned on future foot-landing sequences (discrete contact patterns) rather than explicit gait labels or kinematic targets. To our knowledge, this is the first approach to encode contact sequences as policy conditions; by leveraging the dynamics shared across gaits, a single policy generalizes end-to-end to walking, trotting, bounding, and other gaits. Our method uses imitation learning with a model predictive controller (MPC) as the expert teacher, jointly training a contact-state encoder with a neural policy network. Evaluated on bipedal and quadrupedal simulation platforms, our approach achieves over 40% higher out-of-distribution gait-transfer success rates than baseline methods, demonstrating markedly better robustness and cross-gait generalization.
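To make the conditioning concrete, here is a minimal sketch of a contact-conditioned policy: a small encoder maps a binary future contact schedule (horizon × feet) to an embedding that is concatenated with the proprioceptive state before the policy head. All dimensions, layer sizes, and the `mlp_forward`/`act` helpers are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_forward(x, weights):
    """Tiny MLP with tanh hidden layers and a linear output layer."""
    h = x
    for W, b in weights[:-1]:
        h = np.tanh(h @ W + b)
    W, b = weights[-1]
    return h @ W + b

def init_mlp(sizes):
    """Random-weight MLP parameters for the given layer sizes."""
    return [(rng.normal(scale=0.1, size=(m, n)), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

# Hypothetical dimensions for a quadruped: 4 feet, a 10-step contact
# horizon, a 36-D proprioceptive state, and a 12-D joint-target action.
N_FEET, HORIZON, STATE_DIM, ACTION_DIM = 4, 10, 36, 12

# Contact-state encoder: maps the binary future contact schedule
# (horizon x feet) to a compact embedding.
encoder = init_mlp([N_FEET * HORIZON, 32, 16])
# Policy head: consumes the state concatenated with the embedding.
policy = init_mlp([STATE_DIM + 16, 64, ACTION_DIM])

def act(state, contact_schedule):
    z = mlp_forward(contact_schedule.reshape(-1), encoder)
    return mlp_forward(np.concatenate([state, z]), policy)

# A trot-like schedule: diagonal foot pairs alternate stance (1) / swing (0).
trot = np.array([[1, 0, 0, 1] if t % 2 == 0 else [0, 1, 1, 0]
                 for t in range(HORIZON)])
action = act(rng.normal(size=STATE_DIM), trot)
print(action.shape)  # (12,)
```

Swapping `trot` for a walking or bounding schedule changes only the conditioning input, which is the point of the representation: the same weights serve every gait.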
📝 Abstract
In this paper, we examine the effects of goal representation on performance and generalization in multi-gait policy learning for legged robots. To study this problem in isolation, we cast policy learning as imitating model predictive controllers that can generate multiple gaits. We hypothesize that conditioning a learned policy on future contact switches is a suitable goal representation for learning a single policy that can generate a variety of gaits; our rationale is that policies conditioned on contact information can exploit the structure shared between different gaits. Our extensive simulation results validate this hypothesis for learning multiple gaits on a bipedal and a quadrupedal robot. Most interestingly, our results show that contact-conditioned policies generalize much better than other common goal representations in the literature when the robot is tested outside the distribution of the training data.
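The imitation setup described above can be sketched as behavior cloning against logged expert data: states and contact goals go in, and the controller's actions serve as regression targets. The linear synthetic "expert" below merely stands in for MPC rollouts; all dimensions and the training loop are illustrative assumptions, not the paper's pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical dimensions for the sketch.
STATE_DIM, GOAL_DIM, ACTION_DIM = 8, 6, 3

# Synthetic expert: a fixed linear map from (state, contact goal) to
# action, playing the role of MPC rollouts logged during data collection.
W_expert = rng.normal(size=(STATE_DIM + GOAL_DIM, ACTION_DIM))

# Dataset of (input, expert action) pairs.
X = rng.normal(size=(512, STATE_DIM + GOAL_DIM))
Y = X @ W_expert

# Behavior cloning: fit a linear student by gradient descent on MSE.
W = np.zeros_like(W_expert)

def mse(W):
    return np.mean((X @ W - Y) ** 2)

loss_before = mse(W)
for _ in range(200):
    grad = 2 * X.T @ (X @ W - Y) / len(X)
    W -= 0.05 * grad
loss_after = mse(W)
print(loss_after < loss_before)  # True
```

In practice the student would be the contact-conditioned network rather than a linear map, but the loss and data flow are the same: supervised regression onto the expert's actions, with the contact goal as part of the input.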