Logic of Hypotheses: from Zero to Full Knowledge in Neurosymbolic Integration

📅 2025-09-25
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the fragmentation in neural-symbolic integration (NeSy) by unifying its two dominant paradigms: expert-knowledge-driven rule injection and data-driven symbolic rule induction. The authors propose the Logic of Hypotheses (LoH), a novel formal language that extends propositional logic with learnable selection operators, and combine Gödel fuzzy logic with the recently developed Gödel trick to enable end-to-end differentiable coupling of neural networks and symbolic reasoning. LoH supports progressive modeling—from zero prior knowledge to full symbolic grounding—while preserving interpretability, enabling arbitrary degrees of knowledge injection, and admitting lossless discretization into Boolean functions. Via differentiable compilation, LoH formulas are translated into computational graphs amenable to backpropagation. Experiments on tabular datasets and Visual Tic-Tac-Toe demonstrate significant performance gains and yield high-fidelity, human-readable rules, validating LoH's effectiveness, generalizability, and interpretability.

📝 Abstract
Neurosymbolic integration (NeSy) blends neural-network learning with symbolic reasoning. The field can be split between methods injecting hand-crafted rules into neural models, and methods inducing symbolic rules from data. We introduce Logic of Hypotheses (LoH), a novel language that unifies these strands, enabling the flexible integration of data-driven rule learning with symbolic priors and expert knowledge. LoH extends propositional logic syntax with a choice operator, which has learnable parameters and selects a subformula from a pool of options. Using fuzzy logic, formulas in LoH can be directly compiled into a differentiable computational graph, so the optimal choices can be learned via backpropagation. This framework subsumes some existing NeSy models, while adding the possibility of arbitrary degrees of knowledge specification. Moreover, the use of Gödel fuzzy logic and the recently developed Gödel trick yields models that can be discretized to hard Boolean-valued functions without any loss in performance. We provide experimental analysis on such models, showing strong results on tabular data and on the Visual Tic-Tac-Toe NeSy task, while producing interpretable decision rules.
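The choice operator described in the abstract can be pictured as a learnable selection over a pool of candidate subformulas, evaluated under Gödel fuzzy semantics (AND = min, OR = max, NOT = 1 − x). Below is a minimal sketch assuming a softmax-weighted convex combination as the differentiable surrogate; the `Choice` class and its parameterisation are illustrative, not the paper's exact formulation.

```python
import numpy as np

# Gödel fuzzy connectives on truth values in [0, 1].
def g_and(a, b): return np.minimum(a, b)
def g_or(a, b):  return np.maximum(a, b)
def g_not(a):    return 1.0 - a

def softmax(w):
    e = np.exp(w - np.max(w))
    return e / e.sum()

class Choice:
    """Hypothetical choice operator: a soft, learnable selection
    over a pool of candidate subformula values (assumed form)."""
    def __init__(self, n_options):
        self.w = np.zeros(n_options)  # learnable logits

    def soft(self, option_values):
        # Differentiable surrogate: softmax-weighted combination,
        # so gradients flow to the logits during backpropagation.
        return softmax(self.w) @ np.asarray(option_values)

    def hard(self, option_values):
        # Discretized readout: commit to the highest-scoring option.
        return option_values[int(np.argmax(self.w))]

# Example: choose among three hypotheses over fuzzy atoms x1, x2.
x1, x2 = 0.9, 0.2
pool = [g_and(x1, x2), g_or(x1, x2), g_not(x1)]
c = Choice(len(pool))
soft_value = c.soft(pool)   # uniform blend before any training
hard_value = c.hard(pool)   # committed subformula after hardening
```

After training, hardening the choice recovers a single explicit subformula, which is what makes the learned rule human-readable.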
Problem

Research questions and friction points this paper is trying to address.

Unifying rule injection and induction in neurosymbolic integration
Enabling flexible integration of learned rules with symbolic priors
Allowing arbitrary degrees of knowledge specification in neural models
Innovation

Methods, ideas, or system contributions that make the work stand out.

LoH unifies rule injection and induction via choice operator
LoH formulas compile into differentiable graphs via fuzzy logic
Models discretize losslessly via Gödel fuzzy logic and the Gödel trick
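One intuition for why Gödel semantics admits hard Boolean readouts: thresholding at 0.5 commutes with min and max, so a fuzzy formula built from min/max/1 − x agrees with its Boolean counterpart on thresholded inputs. The check below illustrates this property on a sample formula; it is our own observation for illustration and does not reproduce the paper's Gödel trick or its lossless-discretization proof.

```python
import random

def harden(v, t=0.5):
    # Threshold a fuzzy truth value into a Boolean.
    return v > t

def fuzzy_rule(a, b):
    # Sample Gödel formula: a AND (b OR NOT a).
    return min(a, max(b, 1.0 - a))

def boolean_rule(a, b):
    # Same formula under classical Boolean connectives.
    return harden(a) and (harden(b) or not harden(a))

# Thresholding the fuzzy output matches evaluating the Boolean
# formula on thresholded inputs (ties at exactly 0.5 aside).
random.seed(0)
samples = [(random.random(), random.random()) for _ in range(1000)]
assert all(harden(fuzzy_rule(a, b)) == boolean_rule(a, b)
           for a, b in samples)
```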