🤖 AI Summary
Humanoid robots exhibit limited generalization in locomotion-manipulation tasks across diverse, unstructured environments; existing approaches rely on task-specific fine-tuning, compromising generality and scalability. This paper proposes SkillBlender, a hierarchical reinforcement learning framework that pretrains goal-conditioned, task-agnostic primitive skills and introduces a novel dynamic skill-blending mechanism for adapting to complex tasks with minimal reward engineering. To mitigate reward hacking, it naturally regularizes behaviors, yielding more feasible and human-like motions. The paper further presents SkillBench, the first cross-morphology, parallelized, multi-task simulation benchmark for humanoid loco-manipulation, comprising eight challenging tasks. Experiments demonstrate that SkillBlender significantly outperforms all baselines across all tasks and generalizes effectively across three distinct humanoid morphologies. The code and SkillBench benchmark will be publicly released.
📄 Abstract
Humanoid robots hold significant potential for accomplishing daily tasks across diverse environments thanks to their flexibility and human-like morphology. Recent works have made significant progress in humanoid whole-body control and loco-manipulation by leveraging optimal control or reinforcement learning. However, these methods require tedious task-specific tuning to achieve satisfactory behaviors, limiting their versatility and scalability to the diverse tasks of daily scenarios. To address this, we introduce SkillBlender, a novel hierarchical reinforcement learning framework for versatile humanoid loco-manipulation. SkillBlender first pretrains goal-conditioned, task-agnostic primitive skills, then dynamically blends these skills to accomplish complex loco-manipulation tasks with minimal task-specific reward engineering. We also introduce SkillBench, a parallel, cross-embodiment, and diverse simulated benchmark containing three embodiments, four primitive skills, and eight challenging loco-manipulation tasks, accompanied by a set of scientific evaluation metrics balancing accuracy and feasibility. Extensive simulated experiments show that our method significantly outperforms all baselines while naturally regularizing behaviors to avoid reward hacking, resulting in more accurate and feasible movements for diverse loco-manipulation tasks in daily scenarios. Our code and benchmark will be open-sourced to the community to facilitate future research. Project page: https://usc-gvl.github.io/SkillBlender-web/.