Learning to Plan, Planning to Learn: Adaptive Hierarchical RL-MPC for Sample-Efficient Decision Making

📅 2025-12-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address low sample efficiency and poor adaptability of hierarchical planning in complex dynamic environments, this paper proposes an adaptive hierarchical planning framework that bidirectionally couples reinforcement learning (RL) with Model Predictive Path Integral (MPPI) control. The method employs an RL policy to guide MPPI sampling while dynamically modulating exploration intensity based on value-function uncertainty. It further introduces hierarchical policy value estimation and an uncertainty-aware adaptive sampling mechanism. Evaluated on challenging benchmarks—including autonomous racing, a modified Acrobot task, and obstacle-aware lunar landing—the framework achieves up to a 72% improvement in task success rate over baseline methods, accelerates convergence by 2.1×, and significantly enhances sample efficiency. Moreover, the learned policies exhibit improved robustness against environmental perturbations and stronger cross-task generalization capability.
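The core mechanism the summary describes — sample MPPI action sequences around the RL policy's suggestion, and widen exploration where the value function is uncertain — can be sketched in a few dozen lines. This is a minimal illustrative version, not the paper's implementation: the function names, the ensemble-disagreement uncertainty measure, and the noise-scaling rule are all assumptions for the sake of the example.

```python
import numpy as np

def mppi_step(x0, policy, value_ensemble, dynamics, cost,
              horizon=15, n_samples=64, base_sigma=0.3, lam=1.0, rng=None):
    """One RL-guided MPPI step: perturb the RL policy's nominal plan,
    scale exploration noise by value-ensemble disagreement, and return
    the exponentially weighted first action (illustrative sketch)."""
    rng = np.random.default_rng() if rng is None else rng

    # Uncertainty proxy: disagreement across the value ensemble at x0.
    # Higher disagreement -> wider MPPI exploration (assumed scaling rule).
    uncertainty = np.std([v(x0) for v in value_ensemble])
    sigma = base_sigma * (1.0 + uncertainty)

    # Nominal plan: roll the RL policy forward over the horizon.
    nominal, x = [], x0
    for _ in range(horizon):
        u = policy(x)
        nominal.append(u)
        x = dynamics(x, u)
    nominal = np.array(nominal)                      # (horizon, action_dim)

    # Perturb the nominal plan and score each sampled rollout.
    noise = rng.normal(0.0, sigma, size=(n_samples,) + nominal.shape)
    costs = np.empty(n_samples)
    for k in range(n_samples):
        x, c = x0, 0.0
        for t in range(horizon):
            u = nominal[t] + noise[k, t]
            c += cost(x, u)
            x = dynamics(x, u)
        costs[k] = c

    # Path-integral weighting: softmin over trajectory costs.
    w = np.exp(-(costs - costs.min()) / lam)
    w /= w.sum()
    best_noise = (w[:, None, None] * noise).sum(axis=0)
    return nominal[0] + best_noise[0]
```

On a toy double integrator with a linear-feedback "policy" and a small quadratic-critic ensemble, the returned action stays close to the policy's suggestion when the critics agree and wanders further when they diverge, which is the adaptive behavior the summary attributes to the method.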

📝 Abstract
We propose a new approach for solving planning problems with a hierarchical structure, fusing reinforcement learning and MPC planning. Our formulation tightly and elegantly couples the two planning paradigms. It leverages reinforcement learning actions to inform the MPPI sampler, and adaptively aggregates MPPI samples to inform the value estimation. The resulting adaptive process leverages further MPPI exploration where value estimates are uncertain, and improves training robustness and the overall resulting policies. This results in a robust planning approach that can handle complex planning problems and easily adapts to different applications, as demonstrated over several domains, including race driving, modified Acrobot, and Lunar Lander with added obstacles. Our results in these domains show better data efficiency and overall performance in terms of both rewards and task success, with up to a 72% increase in success rate compared to existing approaches, as well as accelerated convergence (2.1×) compared to non-adaptive sampling.
Problem

Research questions and friction points this paper is trying to address.

How to fuse reinforcement learning and MPC for hierarchical planning
How to make value estimation robust via adaptive aggregation of planner samples
How to solve complex planning problems with better data efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hierarchical RL-MPC fusion for adaptive planning
MPPI sampler informed by reinforcement learning actions
Adaptive sample aggregation improves value estimation
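The third contribution — folding MPPI rollout returns back into the value estimate — can be illustrated with a small aggregation rule. This is a hypothetical sketch, not the paper's estimator: the exponential weighting, the ensemble-disagreement blend factor, and all names here are assumptions chosen to mirror the idea of trusting planner rollouts more where the critics disagree.

```python
import numpy as np

def aggregated_value_target(rollout_returns, critic_values, lam=1.0):
    """Illustrative sketch: combine MPPI rollout returns into a value
    target with exponential weighting, then blend with the critic
    ensemble's mean, leaning on the rollouts when the ensemble's
    disagreement (our assumed uncertainty proxy) is high."""
    rollout_returns = np.asarray(rollout_returns, dtype=float)
    critic_values = np.asarray(critic_values, dtype=float)

    # Softmax over returns: better rollouts get larger weight.
    w = np.exp((rollout_returns - rollout_returns.max()) / lam)
    w /= w.sum()
    rollout_estimate = float(w @ rollout_returns)

    # Ensemble disagreement maps to a blend factor in [0, 1).
    disagreement = float(np.std(critic_values))
    alpha = disagreement / (1.0 + disagreement)

    critic_estimate = float(np.mean(critic_values))
    return alpha * rollout_estimate + (1.0 - alpha) * critic_estimate
```

When the critics agree exactly, the target reduces to the critic mean; as disagreement grows, the target shifts toward the (optimistically weighted) rollout returns, which is one plausible reading of "adaptive sample aggregation improves value estimation."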