🤖 AI Summary
Large language models struggle with NP-hard discrete reasoning and combinatorial optimization problems due to their inherent limitations in handling hard logical constraints and discrete search spaces.
Method: This paper proposes a differentiable neuro-symbolic architecture that tightly integrates symbolic constraint modeling with neural representation learning. A novel probabilistic loss function enables joint end-to-end learning of objectives and constraints, while the combinatorial solver is decoupled from the training loop and used only at inference time, preserving interpretability, training efficiency, and solution accuracy.
Contribution/Results: Experiments demonstrate significantly faster training on Sudoku and on vision-based Min-Cut/Max-Cut tasks; on a real-world protein design energy optimization task, the method efficiently delivers verifiably exact solutions. By removing the solver from backpropagation and retaining it solely for constrained decoding, the approach establishes a practical new paradigm for neuro-symbolic systems on complex constrained optimization problems.
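The decoupling described above can be sketched in toy form. The snippet below is a hypothetical illustration, not the paper's code: a small network predicts the pairwise costs of a discrete cost function network, training minimizes a solver-free probabilistic surrogate (here a pseudo-log-likelihood of the known solution), and an exact solver (here plain enumeration) is called only at inference time. All names, sizes, and the specific surrogate loss are assumptions for illustration.

```python
# Hypothetical sketch of "solver out of the training loop" (not the paper's code).
import itertools
import torch

N_VARS, DOMAIN = 3, 3  # toy problem: 3 discrete variables with values in {0, 1, 2}

class CostNet(torch.nn.Module):
    """Maps an input feature vector to a full pairwise cost tensor."""
    def __init__(self):
        super().__init__()
        self.fc = torch.nn.Linear(4, N_VARS * N_VARS * DOMAIN * DOMAIN)
    def forward(self, x):
        return self.fc(x).view(N_VARS, N_VARS, DOMAIN, DOMAIN)

def total_cost(costs, assignment):
    # Sum of predicted pairwise costs over the upper triangle (i < j).
    return sum(costs[i, j, assignment[i], assignment[j]]
               for i in range(N_VARS) for j in range(i + 1, N_VARS))

def pll_loss(costs, solution):
    """Solver-free surrogate: negative pseudo-log-likelihood of `solution`."""
    loss = 0.0
    for i in range(N_VARS):
        # Cost of each candidate value for variable i, others fixed to the solution.
        cand = torch.stack([
            total_cost(costs, solution[:i] + [v] + solution[i + 1:])
            for v in range(DOMAIN)])
        loss = loss - torch.log_softmax(-cand, dim=0)[solution[i]]
    return loss

def exact_solve(costs):
    """Exact inference by enumeration -- the only place a solver appears."""
    with torch.no_grad():
        return min(itertools.product(range(DOMAIN), repeat=N_VARS),
                   key=lambda a: total_cost(costs, list(a)).item())

torch.manual_seed(0)
net, x, target = CostNet(), torch.randn(4), [0, 2, 1]
opt = torch.optim.Adam(net.parameters(), lr=0.1)
for _ in range(200):                  # no solver call inside this loop
    opt.zero_grad()
    pll_loss(net(x), target).backward()
    opt.step()
print(exact_solve(net(x)))            # solver is invoked only here, after training
```

Because the surrogate loss is differentiable in the network parameters, gradients never have to flow through a combinatorial solver; the solver's exactness is spent only on decoding, which is what makes the training side scale.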
📝 Abstract
In the ongoing quest to hybridize discrete reasoning with neural networks, there is growing interest in architectures that can learn to solve discrete reasoning or optimization problems from natural inputs, a task with which Large Language Models seem to struggle.
Objectives: We introduce a differentiable neuro-symbolic architecture and a loss function dedicated to learning how to solve NP-hard reasoning problems.
Methods: Our new probabilistic loss allows learning both the constraints and the objective, delivering a complete model that can be scrutinized and completed with side constraints. By keeping the combinatorial solver out of the training loop, our architecture offers scalable training, while exact inference at test time gives access to maximum accuracy.
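One plausible shape for such a solver-free probabilistic loss (an illustrative assumption, not the paper's exact definition) is a pseudo-log-likelihood of the observed solution $y$ under the learned cost function $E_\theta$:

$$
\mathcal{L}(\theta) \;=\; -\sum_{i} \log \frac{\exp\!\big(-E_\theta(y_i \mid y_{-i})\big)}{\sum_{v \in D_i} \exp\!\big(-E_\theta(v \mid y_{-i})\big)}
$$

where $E_\theta(v \mid y_{-i})$ is the total predicted cost of the assignment with variable $i$ set to $v$ and all other variables fixed to the solution, and $D_i$ is the domain of variable $i$. A loss of this form is differentiable in $\theta$ and requires no call to a combinatorial solver, so gradients flow through it directly during training.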
Results: We empirically show that it can efficiently learn how to solve NP-hard reasoning problems from natural inputs. On three variants of the Sudoku benchmark -- symbolic, visual, and many-solution -- our approach requires a fraction of the training time of other hybrid methods. On a visual Min-Cut/Max-Cut task, it optimizes regret better than a dedicated Decision-Focused Learning regret loss. Finally, it efficiently learns the energy optimization formulation of the large, real-world problem of designing proteins.