🤖 AI Summary
This work proposes an end-to-end differentiable neuro-symbolic reasoning architecture that overcomes the non-differentiable boundary between neural perception and discrete symbolic solvers, a boundary that traditionally prevents constraint-satisfaction signals from backpropagating to the perceptual module. By softening the immediate consequence operator \(T_P\) of Answer Set Programming (ASP), the model performs continuous constraint reasoning without relying on external solvers. Notably, it dispenses with positional encodings altogether, instead employing structure embeddings based on constraint-group membership, which guarantees invariance under arbitrary variable permutations. The approach integrates softened fixed-point iterations with a constraint-aware attention mechanism. Evaluated on Visual Sudoku, the model achieves 99.89% cell-level accuracy and 100% constraint satisfaction; on MNIST Addition with 2, 4, and 8 addends, it exceeds 99.7% digit-level accuracy across all settings.
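The softened fixed-point idea can be illustrated with a small toy. The sketch below is not the paper's implementation: it assumes a single all-different constraint group, represents each position as a probability distribution over symbols, and applies one hypothetical soft consequence step (`soft_tp_step`) that suppresses a symbol at a cell in proportion to how strongly other cells in the same group claim it. Iterating drives the distributions toward a consistent assignment, and the per-step change plays the role of a fixed-point residual.

```python
import numpy as np

def soft_tp_step(P, groups):
    """One softened consequence step (illustrative, not the paper's T_P):
    within each constraint group, multiply down a symbol's probability
    at a cell by the probability that no other cell in the group claims
    that symbol (a soft all-different constraint), then renormalize.
    Suppression factors from multiple groups combine multiplicatively."""
    Q = P.copy()
    for g in groups:
        for i in g:
            others = [j for j in g if j != i]
            # probability that some other cell in the group takes each symbol
            claim = 1.0 - np.prod(1.0 - P[others], axis=0)
            Q[i] = Q[i] * (1.0 - claim)
    # renormalize each row back to a probability distribution
    return Q / Q.sum(axis=1, keepdims=True)

# toy "row" of 4 cells over symbols {0..3}: two confident, two uncertain
P = np.array([
    [0.97, 0.01, 0.01, 0.01],
    [0.01, 0.97, 0.01, 0.01],
    [0.40, 0.40, 0.12, 0.08],
    [0.25, 0.25, 0.25, 0.25],
])
groups = [[0, 1, 2, 3]]  # one all-different constraint group

for _ in range(50):
    Q = soft_tp_step(P, groups)
    residual = np.abs(Q - P).max()  # the quantity a fixed-point loss would penalize
    P = Q
print(P.argmax(axis=1), residual)
```

In this toy the two confident cells anchor the iteration, and the soft suppression propagates their claims until the uncertain cells settle on the remaining symbols, mirroring how minimizing the fixed-point residual supplies a training signal without a discrete solver.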
📝 Abstract
Neuro-symbolic artificial intelligence (AI) systems typically couple a neural perception module to a discrete symbolic solver through a non-differentiable boundary, preventing constraint-satisfaction feedback from reaching the perception encoder during training. We introduce AS2 (Attention-Based Soft Answer Sets), a fully differentiable neuro-symbolic architecture that replaces the discrete solver with a soft, continuous approximation of the Answer Set Programming (ASP) immediate consequence operator $T_P$. AS2 maintains per-position probability distributions over a finite symbol domain throughout the forward pass and trains end-to-end by minimizing the fixed-point residual of a probabilistic lift of $T_P$, thereby differentiating through the constraint check without invoking an external solver at either training or inference time. The architecture is entirely free of conventional positional embeddings. Instead, it encodes problem structure through constraint-group membership embeddings that directly reflect the declarative ASP specification, making the model agnostic to arbitrary position indexing. On Visual Sudoku, AS2 achieves 99.89% cell accuracy and 100% constraint satisfaction (verified by Clingo) across 1,000 test boards, using a greedy constrained decoding procedure that requires no external solver. On MNIST Addition with $N \in \{2, 4, 8\}$ addends, AS2 achieves digit accuracy above 99.7% across all scales. These results demonstrate that a soft differentiable fixpoint operator, combined with constraint-aware attention and declarative constraint specification, can match or exceed pipeline and solver-based neuro-symbolic systems while maintaining full end-to-end differentiability.
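The greedy constrained decoding described above can be sketched in a few lines. This is a minimal illustration under assumed simplifications (the function name `greedy_constrained_decode` and the toy data are ours, not the paper's): commit the most confident undecided position to its top symbol, zero that symbol out for undecided peers sharing a constraint group, renormalize, and repeat, so no external solver is needed at inference time.

```python
import numpy as np

def greedy_constrained_decode(P, groups):
    """Sketch of greedy constrained decoding: repeatedly fix the most
    confident undecided position to its argmax symbol, then eliminate
    that symbol from every undecided peer in a shared constraint group."""
    P = P.copy()
    n = P.shape[0]
    assignment = -np.ones(n, dtype=int)
    # precompute, for each position, the set of positions it shares a group with
    peers = {i: set() for i in range(n)}
    for g in groups:
        for i in g:
            peers[i] |= set(g) - {i}
    undecided = set(range(n))
    while undecided:
        i = max(undecided, key=lambda j: P[j].max())  # most confident position
        s = int(P[i].argmax())
        assignment[i] = s
        undecided.remove(i)
        for j in peers[i] & undecided:
            P[j, s] = 0.0          # symbol s is no longer available to peers
            P[j] /= P[j].sum()     # renormalize (a real decoder would guard
                                   # against a row summing to zero)
    return assignment

# toy example: naive per-cell argmax would pick symbol 0 twice (positions 0 and 1)
P = np.array([
    [0.90, 0.05, 0.03, 0.02],
    [0.60, 0.30, 0.05, 0.05],
    [0.10, 0.10, 0.70, 0.10],
    [0.20, 0.20, 0.20, 0.40],
])
decoded = greedy_constrained_decode(P, [[0, 1, 2, 3]])
print(decoded)
```

In the toy above, independent argmax violates the all-different constraint, while the greedy procedure resolves the conflict at position 1 once position 0 claims symbol 0.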