Geometric Algorithms for Neural Combinatorial Optimization with Constraints

📅 2025-10-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
End-to-end training of neural networks for combinatorial optimization remains challenging due to discrete constraints that impede gradient flow and violate differentiability requirements. Method: This paper introduces the first self-supervised framework that deeply integrates convex geometry theory into neural optimization. Leveraging Carathéodory’s theorem, it designs a differentiable output decomposition mechanism that explicitly expresses the network’s continuous output as a convex combination of feasible solutions, thereby unifying constraint satisfaction with gradient propagation. The framework natively supports canonical discrete constraint structures—including cardinality constraints, graph independent sets, and matroids. Results: Experiments demonstrate substantial improvements over state-of-the-art neural baselines across diverse constrained optimization tasks. The method achieves significant advances in three critical dimensions: solution feasibility, solution quality, and cross-problem generalization.

📝 Abstract
Self-Supervised Learning (SSL) for Combinatorial Optimization (CO) is an emerging paradigm for solving combinatorial problems using neural networks. In this paper, we address a central challenge of SSL for CO: solving problems with discrete constraints. We design an end-to-end differentiable framework that enables us to solve discrete constrained optimization problems with neural networks. Concretely, we leverage algorithmic techniques from the literature on convex geometry and Carathéodory's theorem to decompose neural network outputs into convex combinations of polytope corners that correspond to feasible sets. This decomposition-based approach not only enables self-supervised training but also ensures efficient quality-preserving rounding of the neural net output into feasible solutions. Extensive experiments in cardinality-constrained optimization show that our approach consistently outperforms neural baselines. We further provide worked-out examples of how our method can be applied beyond cardinality-constrained problems to a diverse set of combinatorial optimization tasks, including finding independent sets in graphs and solving matroid-constrained problems.
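To make the decomposition idea concrete, the cardinality-constrained case can be sketched as follows. A network output x with 0 ≤ x_i ≤ 1 and Σx_i = k lies in the cardinality polytope, whose corners are indicator vectors of size-k subsets; a greedy Carathéodory-style procedure peels off corners one at a time. This is a minimal illustration of the general idea, not the paper's implementation — the function name and greedy top-k corner choice are assumptions of this sketch.

```python
def decompose_cardinality(x, k, tol=1e-9):
    """Greedy Caratheodory-style decomposition of a point x in the
    cardinality polytope {y : 0 <= y_i <= 1, sum(y) = k} into a convex
    combination of indicator vectors of size-k subsets.
    (Illustrative sketch; not the paper's exact algorithm.)"""
    n = len(x)
    y, w = list(x), 1.0          # residual point y and remaining mass w
    subsets, weights = [], []
    for _ in range(n + 1):       # Caratheodory: at most n + 1 corners needed
        if w < tol:
            break
        # Corner choice: the k largest residual entries (stable sort).
        order = sorted(range(n), key=lambda i: -y[i])
        S, rest = order[:k], order[k:]
        # Largest step lam keeping the rescaled residual y/(w - lam) feasible:
        #   i in S:     y_i - lam >= 0   ->  lam <= y_i
        #   i not in S: y_i <= w - lam   ->  lam <= w - y_i
        lam = min(min(y[i] for i in S),
                  min((w - y[i] for i in rest), default=w))
        subsets.append(sorted(S))
        weights.append(lam)
        for i in S:
            y[i] -= lam
        w -= lam
    return subsets, weights
```

Rounding then follows directly: sampling subset S_j with probability λ_j yields a feasible solution whose expected (linear) objective equals that of the fractional point, which is the quality-preserving property the abstract refers to.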
Problem

Research questions and friction points this paper is trying to address.

Solving discrete constrained optimization problems with neural networks
Decomposing neural outputs into feasible polytope corner combinations
Outperforming baselines in cardinality-constrained and graph optimization tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Differentiable framework for discrete constrained optimization
Decomposes outputs into convex polytope combinations
Ensures feasible solutions via quality-preserving rounding