SymDQN: Symbolic Knowledge and Reasoning in Neural Network-based Reinforcement Learning

📅 2025-04-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
Weak interpretability and poor logical consistency plague neural reinforcement learning. To address this, we propose SymDQN—a novel neuro-symbolic architecture that deeply integrates a Logic Tensor Network (LTN)-driven symbolic reasoning module into a Dueling DQN backbone, enabling symbolic guidance of action policies and semantic reasoning over environment states. SymDQN supports shape recognition, reward prediction, and behavior constraints, embedding verifiable symbolic semantics within an end-to-end learning framework. Evaluated on a 5×5 grid navigation task, SymDQN achieves significant improvements in sample efficiency and policy accuracy. Ablation studies confirm that each symbolic component critically contributes to both performance gains and behavioral consistency. This work establishes a modular, scalable, and interpretable paradigm for neuro-symbolic reinforcement learning, advancing the integration of differentiable neural computation with formal symbolic reasoning.
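The Dueling DQN backbone mentioned above combines a scalar state value V(s) with per-action advantages A(s, a) via the standard mean-centred aggregation Q(s, a) = V(s) + A(s, a) − mean over a′ of A(s, a′). As an illustrative sketch (not code from the paper), in plain Python:

```python
def dueling_q_values(value, advantages):
    """Combine a scalar state value V(s) with a list of per-action
    advantages A(s, a) using the dueling-network aggregation:
    Q(s, a) = V(s) + A(s, a) - mean_a' A(s, a')."""
    baseline = sum(advantages) / len(advantages)
    return [value + a - baseline for a in advantages]

# Example with V(s) = 1.0 and advantages for four grid moves
# (up, down, left, right); values here are illustrative only.
q = dueling_q_values(1.0, [0.5, -0.5, 0.25, -0.25])
```

Subtracting the mean advantage makes the decomposition identifiable: shifting all advantages by a constant leaves the resulting Q-values unchanged.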

📝 Abstract
We propose a learning architecture that allows symbolic control and guidance in reinforcement learning with deep neural networks. We introduce SymDQN, a novel modular approach that augments the existing Dueling Deep Q-Networks (DuelDQN) architecture with modules based on the neuro-symbolic framework of Logic Tensor Networks (LTNs). The modules guide action policy learning and allow reinforcement learning agents to display behaviour consistent with reasoning about the environment. Our experiment is an ablation study performed on the modules. It is conducted in a reinforcement learning environment of a 5×5 grid navigated by an agent that encounters various shapes, each associated with a given reward. The underlying DuelDQN attempts to learn the optimal behaviour of the agent in this environment, while the modules facilitate shape recognition and reward prediction. We show that our architecture significantly improves learning, both in terms of performance and the precision of the agent. The modularity of SymDQN invites reflection on the intricacies and complexities of combining neural and symbolic approaches in reinforcement learning.
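The abstract does not specify the environment beyond a 5×5 grid with reward-bearing shapes, so the following is a minimal toy reconstruction of that setup; the shape names and reward values are assumptions for illustration, not taken from the paper:

```python
# Illustrative shape-to-reward mapping (assumed values, not from the paper).
SHAPE_REWARDS = {"circle": 1.0, "square": -1.0, "triangle": 0.5}

class GridWorld:
    """A 5x5 grid: the agent moves one cell per step and collects the
    reward of any shape occupying the cell it lands on."""
    MOVES = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1)}

    def __init__(self, shapes):
        # shapes: mapping from (row, col) to a shape name.
        self.shapes = dict(shapes)
        self.pos = (0, 0)

    def step(self, action):
        dr, dc = self.MOVES[action]
        # Clamp movement to the 5x5 board.
        r = min(max(self.pos[0] + dr, 0), 4)
        c = min(max(self.pos[1] + dc, 0), 4)
        self.pos = (r, c)
        # Collect (and consume) the shape's reward, if any.
        shape = self.shapes.pop(self.pos, None)
        return self.pos, SHAPE_REWARDS.get(shape, 0.0)

env = GridWorld({(0, 1): "circle", (1, 1): "square"})
```

In this sketch, the backbone DuelDQN would learn Q-values over the four moves, while symbolic modules of the kind the paper describes would recognise the shape in a cell and predict its reward.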
Problem

Research questions and friction points this paper is trying to address.

How to integrate symbolic reasoning into neural reinforcement learning
How to guide DuelDQN policy learning with Logic Tensor Networks
How to improve agent performance and precision in grid navigation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Augments DuelDQN with Logic Tensor Networks
Guides action policy via symbolic reasoning
Improves learning performance and precision
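The Logic Tensor Network framework underlying these modules grounds logical predicates as differentiable functions that return truth degrees in [0, 1], combined with fuzzy connectives. A hedged sketch of that idea (the predicate and scores here are hypothetical, not the paper's actual modules):

```python
import math

def sigmoid(x):
    """Squash a real-valued score into a truth degree in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def fuzzy_and(a, b):
    """Product t-norm: the fuzzy conjunction used in many LTN setups."""
    return a * b

# A toy differentiable "predicate": the degree to which a network's
# score indicates that an observed shape is a circle.
def is_circle(score):
    return sigmoid(score)

# Degree of truth of "the shape is a circle AND its reward is positive",
# given an assumed network score of 3.0 and an assumed truth degree 0.9
# for the reward predicate.
truth = fuzzy_and(is_circle(3.0), 0.9)
```

Because every connective is differentiable, such logical constraints can be added to the loss and trained end-to-end alongside the Q-network, which is what enables the symbolic guidance of action policies described above.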