🤖 AI Summary
Safe locomotion of quadrupedal robots over unknown, complex terrains requires reliable foothold selection and trajectory planning under uncertainty.
Method: This paper proposes an end-to-end traversability prediction framework operating directly in image space. It models geometric traversability as a visual-semantic implicit representation and tightly integrates it into a coupled planning architecture combining A*-based graph search with sequential quadratic programming (SQP) for nonlinear trajectory optimization. A synthetic data generation pipeline—built upon primitive geometric shapes—enables self-supervised traversability learning without real-world annotations. A deep semantic segmentation network, augmented with simulation-to-reality transfer techniques, supports robust inference.
Results: Evaluated in simulation and on the ANYmal hardware platform, the method achieves a 37% improvement in planning success rate, sub-80 ms per-step decision latency, and online dynamic replanning capability. It significantly enhances robustness across challenging terrains—including gravel, slopes, and highly irregular surfaces.
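The coupled planner described above interleaves graph search over foothold candidates with trajectory optimization. As a minimal, hypothetical sketch of only the search half, the following runs A* over a per-pixel steppability map, treating pixels below a confidence threshold as untraversable and penalizing low-confidence footholds; the function name, cost weighting, and threshold are illustrative assumptions, not the paper's implementation.

```python
import heapq
import itertools
import math

def astar_footsteps(steppability, start, goal, threshold=0.5):
    """Return a pixel path from start to goal over steppable terrain, or None.

    `steppability` is a 2D grid of assumed per-pixel probabilities in [0, 1].
    """
    h, w = len(steppability), len(steppability[0])
    heur = lambda p: math.hypot(p[0] - goal[0], p[1] - goal[1])
    tie = itertools.count()  # tie-breaker so heap entries never compare nodes
    frontier = [(heur(start), next(tie), 0.0, start, None)]
    came_from = {}
    best_g = {start: 0.0}
    while frontier:
        _, _, g, cur, parent = heapq.heappop(frontier)
        if cur in came_from:
            continue  # already expanded with a better cost
        came_from[cur] = parent
        if cur == goal:
            path = []
            while cur is not None:  # reconstruct the footstep sequence
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        r, c = cur
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                if dr == dc == 0:
                    continue
                nxt = (r + dr, c + dc)
                if not (0 <= nxt[0] < h and 0 <= nxt[1] < w):
                    continue
                s = steppability[nxt[0]][nxt[1]]
                if s < threshold:
                    continue  # untraversable pixel
                # Distance cost inflated for low-steppability pixels, so the
                # search prefers confidently steppable terrain.
                ng = g + math.hypot(dr, dc) * (2.0 - s)
                if ng < best_g.get(nxt, math.inf):
                    best_g[nxt] = ng
                    heapq.heappush(
                        frontier, (ng + heur(nxt), next(tie), ng, nxt, cur))
    return None
```

In the full method, a path like this would seed the SQP stage, which refines footstep placements subject to the robot's kinematic and dynamic constraints.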
📝 Abstract
In this work, we introduce a method for predicting environment steppability -- the ability of a legged robot platform to place a foothold at a particular location in the local environment -- in the image space. This novel environment representation captures this critical geometric property of the local terrain while allowing us to exploit the computational benefits of sensing and planning in the image space. We adapt a primitive shapes-based synthetic data generation scheme to create geometrically rich and diverse simulation scenes and extract ground-truth semantic information to train a steppability model. We then integrate this steppability model into an existing interleaved graph search and trajectory optimization-based footstep planner to demonstrate how this steppability paradigm can inform footstep planning in complex, unknown environments. We analyze the steppability model's performance to demonstrate its validity, and we deploy the perception-informed footstep planner in both offline and online settings to experimentally verify planning performance.
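The synthetic pipeline can self-label steppability because ground-truth geometry is known in simulation. As an illustrative sketch only, the following derives per-pixel labels from a synthetic heightmap using local slope and roughness criteria; the specific thresholds and the 3x3-neighborhood rule are assumptions for demonstration, not the paper's actual labeling rules.

```python
def label_steppability(heights, cell_size=0.05, max_slope=0.6, max_rough=0.03):
    """Label each heightmap cell 1 (steppable) or 0 (not steppable).

    `heights` is a 2D grid of terrain heights in meters; `cell_size` is the
    grid resolution in meters. Thresholds are illustrative assumptions.
    """
    h, w = len(heights), len(heights[0])
    labels = [[0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            # Gather the 3x3 neighborhood around the candidate foothold.
            patch = [heights[rr][cc]
                     for rr in range(max(0, r - 1), min(h, r + 2))
                     for cc in range(max(0, c - 1), min(w, c + 2))]
            mean = sum(patch) / len(patch)
            # Roughness: mean absolute deviation of the local patch.
            rough = sum(abs(z - mean) for z in patch) / len(patch)
            # Slope proxy: max height change to any neighbor, per cell width.
            slope = max(abs(z - heights[r][c]) for z in patch) / cell_size
            labels[r][c] = 1 if slope <= max_slope and rough <= max_rough else 0
    return labels
```

Rendering these labels into the camera frame of the same synthetic scene would then yield image-space training pairs for the segmentation network without any manual annotation.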