Neuro-Symbolic Imitation Learning: Discovering Symbolic Abstractions for Skill Learning

📅 2025-03-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing imitation learning approaches struggle with long-horizon, multi-step robotic tasks because they lack an abstract model of skills and a mechanism for sequencing them. This paper introduces a neuro-symbolic imitation learning framework that, from task demonstrations, first learns a symbolic representation abstracting the low-level state-action space. The learned representation decomposes tasks into easier subtasks and enables symbolic planning over abstract plans; a set of neural skills then refines those plans into executable robot commands. Across three simulated robotic environments, the approach improves data efficiency and generalization over baselines while facilitating interpretability, unifying neural execution with symbolic reasoning for complex, long-horizon behaviors.

📝 Abstract
Imitation learning is a popular method for teaching robots new behaviors. However, most existing methods focus on teaching short, isolated skills rather than long, multi-step tasks. To bridge this gap, imitation learning algorithms must not only learn individual skills but also an abstract understanding of how to sequence these skills to perform extended tasks effectively. This paper addresses this challenge by proposing a neuro-symbolic imitation learning framework. Using task demonstrations, the system first learns a symbolic representation that abstracts the low-level state-action space. The learned representation decomposes a task into easier subtasks and allows the system to leverage symbolic planning to generate abstract plans. Subsequently, the system utilizes this task decomposition to learn a set of neural skills capable of refining abstract plans into actionable robot commands. Experimental results in three simulated robotic environments demonstrate that, compared to baselines, our neuro-symbolic approach increases data efficiency, improves generalization capabilities, and facilitates interpretability.
Problem

Research questions and friction points this paper is trying to address.

Bridging gap between short isolated skills and long multi-step tasks
Learning symbolic abstractions for effective skill sequencing
Improving data efficiency and generalization in robotic imitation learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Neuro-symbolic framework abstracts low-level state-action space
Symbolic planning decomposes tasks into subtasks
Neural skills refine abstract plans into commands
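The three-stage pipeline above (symbolic abstraction, symbolic planning, neural skill refinement) can be sketched as a toy program. All names here (the `Operator` class, the `SKILLS` table, the command strings) are hypothetical illustrations, not the paper's actual interface; the planner is a minimal breadth-first search over add-only symbolic effects, standing in for the full symbolic planner described in the abstract.

```python
# Hypothetical sketch: symbolic operators abstract the state-action space,
# a planner chains them into an abstract plan, and per-operator "skills"
# refine each abstract step into low-level robot commands.
from collections import deque
from dataclasses import dataclass


@dataclass(frozen=True)
class Operator:
    name: str
    preconditions: frozenset  # symbolic facts required to apply the operator
    effects: frozenset        # symbolic facts added by the operator


def plan(state, goal, operators):
    """Breadth-first search over abstract symbolic states (illustrative only)."""
    frontier = deque([(frozenset(state), [])])
    seen = {frozenset(state)}
    while frontier:
        s, steps = frontier.popleft()
        if goal <= s:
            return steps
        for op in operators:
            if op.preconditions <= s:
                nxt = s | op.effects
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, steps + [op.name]))
    return None


# Hypothetical skills: each refines one abstract operator into commands.
SKILLS = {
    "pick": lambda: ["move_to(object)", "close_gripper()"],
    "place": lambda: ["move_to(target)", "open_gripper()"],
}

operators = [
    Operator("pick", frozenset({"reachable"}), frozenset({"holding"})),
    Operator("place", frozenset({"holding"}), frozenset({"placed"})),
]
abstract_plan = plan({"reachable"}, {"placed"}, operators)        # ["pick", "place"]
commands = [c for step in abstract_plan for c in SKILLS[step]()]  # 4 low-level commands
```

In the paper the skills are learned neural policies and the symbolic representation is itself induced from demonstrations; this sketch only shows how the two levels fit together, with the planner operating purely on abstract facts and the skills grounding each step.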
Leon Keller
Intelligent Autonomous Systems, TU Darmstadt, Germany
Daniel Tanneberg
Senior Scientist @ Honda Research Institute EU, PhD from IAS @ TU Darmstadt
Artificial Intelligence, Machine Learning, Robotics
Jan Peters
Intelligent Autonomous Systems, TU Darmstadt, Germany; German Research Center for AI, Germany; Hessian Centre for Artificial Intelligence, Germany