🤖 AI Summary
Neurosymbolic models face the challenge of learning interpretable symbolic structure from subsymbolic data. Method: This paper introduces neurosymbolic decision trees (NDTs) and a structure-learning algorithm for them, NeuID3. NeuID3 incorporates logical prior knowledge into decision tree induction by combining DeepProbLog's neural probabilistic logic representation with standard top-down (ID3-style) tree induction, enabling joint optimization of neural components (which process subsymbolic inputs such as images) and symbolic components (logic rules). Contribution/Results: Compared to purely neural approaches, NDTs show improved generalization, few-shot reasoning capability, and interpretability on multi-class symbolic-visual hybrid tasks. They preserve structural readability, enabling extraction of human-understandable rules, while remaining robust to perceptual input perturbations.
📝 Abstract
Neurosymbolic (NeSy) AI studies the integration of neural networks (NNs) and symbolic reasoning based on logic. Usually, NeSy techniques focus on learning the neural, probabilistic and/or fuzzy parameters of NeSy models. Learning the symbolic or logical structure of such models has, so far, received less attention. We introduce neurosymbolic decision trees (NDTs), an extension of decision trees, together with a novel NeSy structure learning algorithm, which we dub NeuID3. NeuID3 adapts the standard top-down induction of decision trees and combines it with a neural probabilistic logic representation, inherited from the DeepProbLog family of models. The key advantages of learning NDTs with NeuID3 are the support of both symbolic and subsymbolic data (such as images), and the ability to exploit background knowledge during the induction of the tree structure. In our experimental evaluation we demonstrate the benefits of NeSy structure learning over more traditional approaches such as purely data-driven learning with neural networks.
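To make the "standard top-down induction" that NeuID3 builds on concrete, here is a minimal sketch of classic ID3 on purely symbolic features, splitting greedily on information gain. This is only the baseline induction loop, not the paper's NeuID3: the neural probabilistic logic tests, subsymbolic inputs, and background knowledge are absent, and all function names are illustrative.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a multiset of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def best_split(rows, labels, features):
    """Return the feature with the highest information gain."""
    base = entropy(labels)
    def gain(f):
        remainder = 0.0
        for v in set(r[f] for r in rows):
            sub = [l for r, l in zip(rows, labels) if r[f] == v]
            remainder += len(sub) / len(labels) * entropy(sub)
        return base - remainder
    return max(features, key=gain)

def id3(rows, labels, features):
    """Top-down induction: recursively split on the best feature.

    A leaf is a class label; an internal node is a pair
    (feature, {value: subtree}).
    """
    if len(set(labels)) == 1:          # pure node -> leaf
        return labels[0]
    if not features:                   # no tests left -> majority leaf
        return Counter(labels).most_common(1)[0][0]
    f = best_split(rows, labels, features)
    rest = [g for g in features if g != f]
    children = {}
    for v in set(r[f] for r in rows):
        idx = [i for i, r in enumerate(rows) if r[f] == v]
        children[v] = id3([rows[i] for i in idx],
                          [labels[i] for i in idx], rest)
    return (f, children)

# Toy dataset: the class depends only on "shape", so ID3 splits on it first.
rows = [
    {"shape": "circle", "color": "red"},
    {"shape": "circle", "color": "blue"},
    {"shape": "square", "color": "red"},
    {"shape": "square", "color": "blue"},
]
labels = ["pos", "pos", "neg", "neg"]
tree = id3(rows, labels, ["shape", "color"])
# tree == ("shape", {"circle": "pos", "square": "neg"})
```

NeuID3 generalizes this loop by letting the candidate tests at each node be (neural) probabilistic logic facts and rules rather than equality tests on symbolic attributes, which is what allows images and background knowledge to participate in the induction.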