🤖 AI Summary
This work addresses the challenges of instability and poor sim-to-real transfer in quadrupedal robots operating in real-world environments, which arise from a scale mismatch between high-level navigation goals and low-level gait execution, as well as from distribution shifts caused by out-of-distribution terrain variations. To tackle these issues, the authors propose a hierarchical reinforcement learning architecture in which a high-level policy generates executable subgoals from sparse semantic or geometric terrain cues, while a low-level policy achieves precise locomotion through gait-conditioned control. The framework provides an explicit policy interface that enables runtime parameter tuning, diagnostics, and optimization during deployment. A performance-driven, structured curriculum learning mechanism further enhances adaptability to unseen terrains. Experimental results demonstrate significantly higher task success rates on mixed and out-of-distribution terrains, improving navigation robustness and generalization.
📝 Abstract
Real-world quadruped navigation is constrained by a scale mismatch between high-level navigation decisions and low-level gait execution, as well as by instabilities under out-of-distribution environmental changes. Such variations challenge sim-to-real transfer and can trigger falls when policies lack explicit interfaces for adaptation. In this paper, we present a hierarchical policy architecture for quadrupedal navigation, termed Task-level Decision to Gait Control (TDGC). A low-level policy, trained with reinforcement learning in simulation, delivers gait-conditioned locomotion and maps task requirements to a compact set of controllable behavior parameters, enabling robust mode generation and smooth switching. A high-level policy makes task-centric decisions from sparse semantic or geometric terrain cues and translates them into low-level targets, forming a traceable decision pipeline that requires neither dense maps nor high-resolution terrain reconstruction. Unlike end-to-end approaches, our architecture provides explicit interfaces for deployment-time tuning, fault diagnosis, and policy refinement. We introduce a structured curriculum with performance-driven progression that gradually expands environmental difficulty and disturbance ranges. Experiments show higher task success rates on mixed terrains and in out-of-distribution tests.
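To make the described architecture concrete, the following is a minimal sketch of the hierarchical interface and the performance-driven curriculum. All class names, parameters, and numeric values are illustrative assumptions, not the paper's actual API: a high-level policy maps sparse terrain cues to a subgoal plus gait parameters, a gait-conditioned low-level policy consumes those parameters, and the curriculum widens terrain difficulty only once a recent success-rate threshold is cleared.

```python
# Hypothetical sketch of the TDGC-style hierarchy described above.
# Names and magnitudes are assumptions for illustration only.

class HighLevelPolicy:
    """Maps sparse terrain cues to a subgoal and controllable gait parameters."""
    def act(self, terrain_cue: dict) -> dict:
        slope = terrain_cue["slope"]  # sparse geometric cue in [0, 1]
        return {
            "subgoal": terrain_cue["goal_direction"],
            "gait_frequency": 2.0 - 0.5 * slope,  # slower stepping on steep terrain (Hz)
            "step_height": 0.08 + 0.04 * slope,   # higher foot clearance on steep terrain (m)
        }

class LowLevelPolicy:
    """Gait-conditioned locomotion: (proprioception, gait params) -> joint targets.
    Stand-in for the RL-trained network; returns a fixed-size action vector."""
    def act(self, proprio: list, gait_params: dict) -> list:
        return [0.0] * 12  # 12 joint position targets for a quadruped

class Curriculum:
    """Performance-driven progression: expand difficulty once the success
    rate over a sliding window clears a threshold."""
    def __init__(self, threshold: float = 0.8, window: int = 100):
        self.threshold, self.window = threshold, window
        self.results: list[bool] = []
        self.difficulty = 0.1  # normalized terrain difficulty in [0, 1]

    def record(self, success: bool) -> None:
        self.results.append(success)
        recent = self.results[-self.window:]
        if len(recent) == self.window and sum(recent) / self.window >= self.threshold:
            self.difficulty = min(1.0, self.difficulty + 0.1)
            self.results.clear()  # restart evaluation at the new difficulty

# One step of the hierarchy: cue -> gait parameters -> joint targets.
hl, ll = HighLevelPolicy(), LowLevelPolicy()
params = hl.act({"slope": 0.5, "goal_direction": (1.0, 0.0)})
action = ll.act(proprio=[0.0] * 36, gait_params=params)
```

The explicit `gait_params` dictionary is the "policy interface" the abstract refers to: because it is a named, inspectable structure rather than a hidden latent, its entries can be logged, tuned, or clamped at deployment time for diagnosis and refinement.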