Complexity Scaling Laws for Neural Models using Combinatorial Optimization

📅 2025-06-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses neural combinatorial optimization, treating problem-intrinsic complexity, rather than computational resources, as the primary scaling variable. Method: Using the Traveling Salesman Problem (TSP) as a canonical benchmark, we quantitatively model two fundamental complexity measures: solution-space size and representation-space size. We combine combinatorial analysis with empirical studies across reinforcement learning, supervised fine-tuning, and local-search gradient descent, analyzing cost-landscape dynamics and generalization behavior. Contribution/Results: We find that suboptimality scales as a power law with node count or embedding dimension, robustly across training paradigms. Critically, gradient-based local search on the cost landscape produces similar scaling trends, suggesting the law arises from problem structure rather than algorithmic choices. The result is a smooth, interpretable, problem-complexity-driven scaling framework for neural combinatorial optimization, offering a principled basis for predicting and understanding performance limits in learned solvers.
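The power-law relationship the summary describes can be sketched as a log-log line fit. The sketch below uses synthetic data generated from an assumed exponent purely for illustration; the node counts and parameters are not values from the paper:

```python
import numpy as np

# Hypothetical illustration: a power law gap(n) = a * n^b relating the
# suboptimality gap to TSP node count. Synthetic data from assumed a, b.
nodes = np.array([20, 50, 100, 200, 500])
true_a, true_b = 0.02, 0.7                      # assumed demo parameters
gap = true_a * nodes.astype(float) ** true_b

# A power law is linear in log-log space: log gap = log a + b * log n,
# so an ordinary least-squares line fit recovers the exponent.
b_fit, log_a_fit = np.polyfit(np.log(nodes), np.log(gap), 1)
a_fit = np.exp(log_a_fit)
print(f"fitted exponent b = {b_fit:.3f}, prefactor a = {a_fit:.4f}")
```

Because the synthetic data follows the power law exactly, the fit recovers the assumed parameters; on real measurements one would fit the same line to observed suboptimality gaps.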

📝 Abstract
Recent work on neural scaling laws demonstrates that model performance scales predictably with compute budget, model size, and dataset size. In this work, we develop scaling laws based on problem complexity. We analyze two fundamental complexity measures: solution space size and representation space size. Using the Traveling Salesman Problem (TSP) as a case study, we show that combinatorial optimization promotes smooth cost trends, and therefore meaningful scaling laws can be obtained even in the absence of an interpretable loss. We then show that suboptimality grows predictably for fixed-size models when scaling the number of TSP nodes or spatial dimensions, independent of whether the model was trained with reinforcement learning or supervised fine-tuning on a static dataset. We conclude with an analogy to problem complexity scaling in local search, showing that a much simpler gradient descent on the cost landscape produces similar trends.
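The abstract's closing analogy, gradient descent on the cost landscape, can be illustrated with a toy sketch. The relaxation below is a hypothetical construction, not the paper's exact formulation: a tour is relaxed to per-position city distributions via a row softmax over logits, and plain gradient descent minimizes expected tour length plus a penalty pushing each city to be visited once:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy instance: n random cities in the unit square and their distance matrix.
n = 8
cities = rng.random((n, 2))
D = np.linalg.norm(cities[:, None] - cities[None, :], axis=-1)

def softmax_rows(L):
    Z = np.exp(L - L.max(axis=1, keepdims=True))
    return Z / Z.sum(axis=1, keepdims=True)

def relaxed_cost(P, lam):
    # Expected length of a cyclic tour under per-position distributions P,
    # plus a quadratic penalty for cities not visited exactly once.
    tour_cost = sum(P[t] @ D @ P[(t + 1) % n] for t in range(n))
    penalty = lam * np.sum((P.sum(axis=0) - 1.0) ** 2)
    return tour_cost + penalty

L = 0.1 * rng.standard_normal((n, n))   # logits: positions x cities
lr, lam = 0.2, 2.0
init_cost = relaxed_cost(softmax_rows(L), lam)

for step in range(3000):
    P = softmax_rows(L)
    # dC/dP[t]: expected distances to both tour neighbors (D is symmetric),
    # plus the visit-once penalty gradient (same for every position t).
    G = np.stack([D @ P[(t + 1) % n] + D @ P[(t - 1) % n] for t in range(n)])
    G += 2.0 * lam * (P.sum(axis=0) - 1.0)
    # Chain rule through the row softmax.
    dL = P * (G - (P * G).sum(axis=1, keepdims=True))
    L -= lr * dL

final_cost = relaxed_cost(softmax_rows(L), lam)

# Crude decode: each position takes its most likely city (cities may repeat
# on hard instances; a proper decode would use e.g. the Hungarian algorithm).
tour = softmax_rows(L).argmax(axis=1)
hard_cost = sum(D[tour[t], tour[(t + 1) % n]] for t in range(n))
print("decoded tour:", tour, "cost:", round(hard_cost, 3))
```

Scaling such a local search over n (or over the spatial dimension of `cities`) and fitting the resulting suboptimality gaps is the kind of experiment the abstract's analogy refers to.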
Problem

Research questions and friction points this paper is trying to address.

Develop scaling laws based on problem complexity measures
Analyze solution and representation space size in neural models
Study suboptimality growth in fixed-size models for TSP
Innovation

Methods, ideas, or system contributions that make the work stand out.

Scaling laws based on problem complexity measures
Combinatorial optimization promotes smooth cost trends
Suboptimality grows predictably with scaling parameters