🤖 AI Summary
This work addresses the challenge of acquiring domain models for tasks whose states mix discrete and numeric variables. We propose a paradigm that integrates action model learning with numeric planning. Specifically, we present NSAM_(+p), an offline method that learns safe numeric action models from expert trajectories and solves new problems with a numeric planner, and we introduce RAMP, an online framework that establishes a positive feedback loop between model learning and policy optimization. Our empirical evaluation, conducted on long-horizon tasks in Minecraft, demonstrates that the learned numeric domain models substantially improve success rates on long-horizon tasks and enhance generalization across environments of varying scales. Moreover, RAMP outperforms pure reinforcement learning baselines, generating higher-quality plans and solving a greater number of tasks. This work advances model-driven embodied intelligence by providing a scalable approach to learning and planning in numeric domains with safety guarantees.
📝 Abstract
Automated Planning algorithms require a model of the domain that specifies the preconditions and effects of each action. Obtaining such a domain model is notoriously hard. Algorithms for learning domain models exist, yet it remains unclear whether learning a domain model and planning with it is an effective approach in numeric planning environments, i.e., where states include both discrete and numeric state variables. In this work, we explore the benefits of learning a numeric domain model and compare this approach with alternative model-free solutions. As a case study, we use two tasks in Minecraft, a popular sandbox game that has been used as an AI challenge. First, we consider an offline learning setting, where a set of expert trajectories is available to learn from. This is the standard setting for learning domain models. We use the Numeric Safe Action Model Learning (NSAM) algorithm to learn a numeric domain model and solve new problems with the learned domain model and a numeric planner. We call this model-based solution NSAM_(+p) and compare it to several model-free Imitation Learning (IL) and Offline Reinforcement Learning (RL) algorithms. Empirical results show that some IL algorithms learn to solve simple tasks faster, while NSAM_(+p) solves tasks that require long-term planning and generalizes to problems in larger environments. Then, we consider an online learning setting, where learning is done by moving an agent in the environment. For this setting, we introduce RAMP. In RAMP, observations collected during the agent's execution are used to simultaneously train an RL policy and learn a planning domain action model. This forms a positive feedback loop between the RL policy and the learned domain model. We demonstrate experimentally the benefits of RAMP, showing that it finds more efficient plans and solves more problems than several RL baselines.
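To make the observe-learn-plan loop concrete, here is a minimal toy sketch of the idea, not the paper's implementation: the names (`collect_trajectory`, `learn_action_model`, `plan`), the single-fluent environment, and the random exploration policy are all illustrative assumptions. The conservative rule of keeping only effects consistent with every observation loosely mirrors the notion of a safe action model.

```python
# Hypothetical sketch of an online learn-and-plan loop; all names and the
# toy environment are illustrative, not the paper's actual API.
import random
from collections import deque

def collect_trajectory(policy, env_step, horizon=10):
    """Roll out a policy, recording (state, action, next_state) triples."""
    state, traj = 0, []
    for _ in range(horizon):
        action = policy(state)
        next_state = env_step(state, action)
        traj.append((state, action, next_state))
        state = next_state
    return traj

def learn_action_model(observations):
    """Conservatively infer each action's numeric effect: a 'safe' model
    keeps only effects consistent with every observed transition."""
    effects = {}
    for s, a, s2 in observations:
        delta = s2 - s
        if a not in effects:
            effects[a] = delta
        elif effects[a] != delta:
            effects[a] = None  # inconsistent observations: drop as unsafe
    return {a: d for a, d in effects.items() if d is not None}

def plan(model, start, goal, max_len=20):
    """Tiny forward breadth-first search over the learned action model."""
    frontier, seen = deque([(start, [])]), {start}
    while frontier:
        state, actions = frontier.popleft()
        if state == goal:
            return actions
        if len(actions) >= max_len:
            continue
        for a, d in model.items():
            nxt = state + d
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, actions + [a]))
    return None

# Toy deterministic environment: each action shifts one numeric fluent.
def env_step(state, action):
    return state + {"inc": 1, "dec": -1, "jump": 3}[action]

random.seed(0)
# Seed observations covering every action, then refine with random rollouts.
observations = [(0, a, env_step(0, a)) for a in ("inc", "dec", "jump")]
for _ in range(5):
    explore = lambda s: random.choice(["inc", "dec", "jump"])
    observations += collect_trajectory(explore, env_step)

model = learn_action_model(observations)
plan_found = plan(model, start=0, goal=7)
```

In this toy setting the loop learns each action's effect from experience and then plans with the learned model; the real algorithms handle relational state, multiple fluents, and far harder search.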