Efficient Neuro-Symbolic Learning of Constraints and Objective

📅 2025-08-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models struggle with NP-hard discrete reasoning and combinatorial optimization problems due to their inherent limitations in handling hard logical constraints and discrete search spaces. Method: This paper proposes a differentiable neuro-symbolic architecture that tightly integrates symbolic constraint modeling with neural representation learning. A novel probabilistic loss function enables joint end-to-end learning of objectives and constraints, while crucially decoupling the combinatorial solver from the training loop, using it only during inference to preserve interpretability, training efficiency, and solution accuracy. Contribution/Results: Experiments demonstrate significantly accelerated training on Sudoku and vision-based Min-Cut/Max-Cut tasks; on a real-world protein design task formulated as energy optimization, the method achieves efficient, verifiably exact solutions. By eliminating solver involvement in backpropagation and retaining it solely for constrained decoding, the approach establishes a new practical paradigm for neuro-symbolic systems in complex constrained optimization.
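To make the decoupling idea above concrete, here is a minimal toy sketch (not the paper's implementation; all function names are hypothetical). A likelihood-style loss scores a ground-truth assignment directly under learned unary and pairwise costs, so training never calls a solver; exact combinatorial search is used only at inference. A brute-force search stands in for a real CP/ILP solver.

```python
import numpy as np
from itertools import product

def pseudo_loglik(costs, pair, y):
    """Pseudo-log-likelihood of assignment y under unary + pairwise costs.
    costs: (n, d) unary costs; pair: (n, n, d, d) pairwise costs; y: (n,) ints.
    Probability of a value is proportional to exp(-cost); this loss is
    solver-free, so it can be backpropagated through a cost-predicting net."""
    n, d = costs.shape
    ll = 0.0
    for i in range(n):
        # conditional energy of each value of variable i, others fixed at y
        e = costs[i].copy()
        for j in range(n):
            if j != i:
                e += pair[i, j, :, y[j]]
        log_z = np.log(np.exp(-e).sum())
        ll += -e[y[i]] - log_z
    return ll

def exact_solve(costs, pair):
    """Exact inference by exhaustive search (a stand-in for a real exact
    solver, invoked only at inference time, never during training)."""
    n, d = costs.shape
    best, best_e = None, np.inf
    for y in product(range(d), repeat=n):
        e = sum(costs[i, y[i]] for i in range(n))
        e += sum(pair[i, j, y[i], y[j]]
                 for i in range(n) for j in range(i + 1, n))
        if e < best_e:
            best, best_e = y, e
    return np.array(best), best_e
```

For example, with a pairwise cost table that heavily penalizes two variables taking equal values, `pseudo_loglik` scores unequal assignments higher, and `exact_solve` returns one of them; a network trained against this loss would learn such cost tables from data.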

📝 Abstract
In the ongoing quest for hybridizing discrete reasoning with neural nets, there is an increasing interest in neural architectures that can learn how to solve discrete reasoning or optimization problems from natural inputs, a task that Large Language Models seem to struggle with. Objectives: We introduce a differentiable neuro-symbolic architecture and a loss function dedicated to learning how to solve NP-hard reasoning problems. Methods: Our new probabilistic loss allows for learning both the constraints and the objective, thus delivering a complete model that can be scrutinized and completed with side constraints. By pushing the combinatorial solver out of the training loop, our architecture also offers scalable training while exact inference gives access to maximum accuracy. Results: We empirically show that it can efficiently learn how to solve NP-hard reasoning problems from natural inputs. On three variants of the Sudoku benchmark -- symbolic, visual, and many-solution -- our approach requires a fraction of the training time of other hybrid methods. On a visual Min-Cut/Max-Cut task, it optimizes the regret better than a regret-dedicated Decision-Focused Learning loss. Finally, it efficiently learns the energy optimization formulation of the large real-world problem of designing proteins.
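The abstract notes that the learned model can be "completed with side constraints". As a rough, hypothetical illustration (not the paper's API): because the model is an explicit cost function rather than a black-box net, a hard constraint can be imposed at decoding time without any retraining, simply by restricting the solver's feasible set. An exhaustive search again stands in for a real exact solver.

```python
import numpy as np
from itertools import product

def solve_with_side_constraint(costs, feasible):
    """Exhaustive stand-in for an exact solver: minimize learned unary
    costs subject to an extra hard constraint supplied only at inference.
    costs: (n, d) learned unary costs; feasible: predicate on assignments."""
    n, d = costs.shape
    best, best_e = None, np.inf
    for y in product(range(d), repeat=n):
        if not feasible(y):
            continue  # side constraint prunes the search; no retraining
        e = sum(costs[i, y[i]] for i in range(n))
        if e < best_e:
            best, best_e = y, e
    return np.array(best), best_e

# Usage sketch: add an all-different side constraint on top of learned costs.
learned = np.array([[0., 1., 2.],
                    [0., 1., 2.],
                    [0., 2., 1.]])
assignment, energy = solve_with_side_constraint(
    learned, lambda y: len(set(y)) == len(y))
```

In a real system the predicate would instead be posted as a constraint (e.g. all-different) to an exact CP or cost-function-network solver, which is what makes the decoded solutions verifiably optimal.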
Problem

Research questions and friction points this paper is trying to address.

Learning NP-hard reasoning problems from natural inputs
Differentiable neuro-symbolic architecture for constraint and objective learning
Scalable training with exact inference for maximum accuracy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Differentiable neuro-symbolic architecture for NP-hard problems
Probabilistic loss learning constraints and objective
Scalable training with exact inference capability
Marianne Defresne
Department of Computer Science, KU Leuven, Belgium
Romain Gambardella
Télécom-Paris, France
Sophie Barbe
TBI, Université de Toulouse, CNRS, INRAE, INSA, ANITI, France
Thomas Schiex
Université de Toulouse, ANITI, INRAE, Toulouse, France
Artificial Intelligence · Bioinformatics · Constraint Programming · Graphical Models · Protein Design