🤖 AI Summary
This work addresses the challenge of integrating high-level temporal logical knowledge into deep learning models for sequential sub-symbolic data, such as image sequences or non-Markovian environments. To this end, the authors propose DeepDFA, a framework that embeds deterministic finite automata (DFAs) or Moore machines as continuous, differentiable layers within deep neural networks, enabling end-to-end fusion of symbolic temporal rules and sub-symbolic learning. The differentiable DFA layer explicitly guides sequence modeling. Experiments on image sequence classification and non-Markovian policy learning show that DeepDFA significantly outperforms conventional deep sequence models (LSTMs, GRUs, Transformers) and recent neuro-symbolic approaches, achieving state-of-the-art performance in injecting temporal knowledge into neural systems.
📝 Abstract
Integrating logical knowledge into deep neural network training remains a hard challenge, especially in sequential or temporally extended domains with subsymbolic observations. To address this problem, we propose DeepDFA, a neuro-symbolic framework that integrates high-level temporal logic, expressed as Deterministic Finite Automata (DFAs) or Moore Machines, into neural architectures. DeepDFA models temporal rules as continuous, differentiable layers, enabling symbolic knowledge injection into subsymbolic domains. We demonstrate DeepDFA in two key settings: (i) static image sequence classification, and (ii) policy learning in interactive non-Markovian environments. Across extensive experiments, DeepDFA outperforms traditional deep learning models (e.g., LSTMs, GRUs, Transformers) and recent neuro-symbolic systems, achieving state-of-the-art results in temporal knowledge integration. These results highlight DeepDFA's potential to bridge subsymbolic learning and symbolic reasoning in sequential tasks.
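The abstract does not spell out how a DFA becomes a differentiable layer. A standard relaxation, used in the differentiable-automata literature, replaces the hard transition function δ(state, symbol) with a learnable row-stochastic transition matrix per symbol, so a belief over states can be propagated with ordinary matrix algebra. The sketch below illustrates that idea only; the class and method names are hypothetical and are not DeepDFA's actual API.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

class ProbabilisticDFALayer:
    """Continuous relaxation of a DFA (illustrative, not DeepDFA's code):
    each symbol gets a row-stochastic state-transition matrix, so the
    state update is a differentiable expectation instead of a hard jump."""

    def __init__(self, n_states, n_symbols, rng=None):
        rng = rng or np.random.default_rng(0)
        # Unnormalized transition logits: one (n_states x n_states) matrix per symbol.
        self.logits = rng.normal(size=(n_symbols, n_states, n_states))
        # Per-state acceptance logits (a Moore machine would instead attach
        # an output vector to each state).
        self.accept_logits = rng.normal(size=n_states)
        self.n_states = n_states

    def forward(self, symbol_probs):
        """symbol_probs: (T, n_symbols) array, each row a distribution over
        symbols (e.g. a CNN's softmax over perceived symbols). Returns the
        acceptance probability and the final belief over automaton states."""
        trans = softmax(self.logits, axis=-1)  # row-stochastic per symbol
        belief = np.zeros(self.n_states)
        belief[0] = 1.0                        # start in the initial state
        for p in symbol_probs:
            # Expected next-state distribution, averaging transitions
            # over the current symbol distribution p.
            belief = np.einsum('s,j,sjk->k', p, belief, trans)
        accept = 1.0 / (1.0 + np.exp(-self.accept_logits))  # sigmoid per state
        return float(belief @ accept), belief
```

Because every operation is a matrix product or a smooth nonlinearity, gradients flow through the automaton to whatever perception network produces `symbol_probs`, which is what allows end-to-end training with subsymbolic inputs.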