🤖 AI Summary
To address low sample efficiency and poor adaptability of hierarchical planning in complex dynamic environments, this paper proposes an adaptive hierarchical planning framework that bidirectionally couples reinforcement learning (RL) with Model Predictive Path Integral (MPPI) control. The method employs an RL policy to guide MPPI sampling while dynamically modulating exploration intensity based on value-function uncertainty. It further introduces hierarchical policy value estimation and an uncertainty-aware adaptive sampling mechanism. Evaluated on challenging benchmarks—including autonomous racing, a modified Acrobot task, and obstacle-aware lunar landing—the framework achieves up to a 72% improvement in task success rate over baseline methods, accelerates convergence by 2.1×, and significantly enhances sample efficiency. Moreover, the learned policies exhibit improved robustness against environmental perturbations and stronger cross-task generalization capability.
📝 Abstract
We propose a new approach for solving planning problems with a hierarchical structure, fusing reinforcement learning and MPC planning. Our formulation tightly and elegantly couples the two planning paradigms. It leverages reinforcement learning actions to inform the MPPI sampler, and adaptively aggregates MPPI samples to inform the value estimation. The resulting adaptive process leverages further MPPI exploration where value estimates are uncertain, and improves training robustness and the overall resulting policies. This results in a robust planning approach that can handle complex planning problems and easily adapts to different applications, as demonstrated over several domains, including race driving, modified Acrobot, and Lunar Lander with added obstacles. Our results in these domains show better data efficiency and overall performance in terms of both rewards and task success, with up to a 72% increase in success rate compared to existing approaches, as well as accelerated convergence (x2.1) compared to non-adaptive sampling.