Adaptation and Fine-tuning with TabPFN for Travelling Salesman Problem

📅 2025-11-08
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This work addresses the computational inefficiency of exact/heuristic algorithms and the lengthy training and poor generalization of existing learning-based approaches for the Traveling Salesman Problem (TSP). We introduce TabPFN, a lightweight, table-based foundation model, into combinatorial optimization for the first time. Our method adopts a node-prediction paradigm for path construction, enabling end-to-end TSP solving via single-sample adaptation and fine-tuning, without post-processing. Trained in minutes, it generalizes across problem scales (20–500 nodes) and matches the performance of state-of-the-art models requiring complex post-processing, while drastically reducing data and computational requirements. Key contributions include: (i) the first application of TabPFN to combinatorial optimization; (ii) a novel node-level modeling strategy; and (iii) an efficient solving framework featuring strong generalization, minimal performance degradation across scales, and zero post-processing overhead.
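The paper publishes no code, but the node-prediction paradigm it describes can be sketched: the route is built one node at a time, with a model scoring the remaining candidates at each step. Here a greedy negative-distance scorer stands in for the fine-tuned TabPFN predictions; all names and the feature encoding are illustrative, not the authors' implementation.

```python
import numpy as np

def build_route(coords, score_next):
    """Construct a TSP route node by node.

    At each step the model scores the remaining candidate nodes given the
    current node, and the highest-scoring candidate is appended to the
    route. `score_next(current, candidates)` stands in for the adapted
    model's per-node predictions.
    """
    n = len(coords)
    route = [0]                      # start arbitrarily at node 0
    remaining = set(range(1, n))
    while remaining:
        cand = np.array(sorted(remaining))
        scores = score_next(coords[route[-1]], coords[cand])
        nxt = int(cand[np.argmax(scores)])
        route.append(nxt)
        remaining.remove(nxt)
    return route

def greedy_score(current, candidates):
    # stand-in scorer: prefer nearer nodes (negative Euclidean distance)
    return -np.linalg.norm(candidates - current, axis=1)

coords = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
route = build_route(coords, greedy_score)
print(route)  # → [0, 1, 2, 3]
```

Swapping `greedy_score` for a classifier's predicted probabilities (e.g. a fitted `TabPFNClassifier` from the `tabpfn` package) turns this loop into the end-to-end, post-processing-free construction the summary describes.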

๐Ÿ“ Abstract
Tabular Prior-Data Fitted Network (TabPFN) is a foundation model designed for small to medium-sized tabular data that has recently attracted much attention. This paper investigates the application of TabPFN to Combinatorial Optimization (CO) problems. The aim is to lessen the time- and data-intensive training requirements often observed when using traditional methods, including exact and heuristic algorithms and Machine Learning (ML)-based models, to solve CO problems. Proposing possibly the first application of TabPFN for such a purpose, we adapt and fine-tune the TabPFN model to solve the Travelling Salesman Problem (TSP), one of the most well-known CO problems. Specifically, we adopt the node-based approach and the node-predicting adaptation strategy to construct the entire TSP route. Our evaluation with varying instance sizes confirms that TabPFN requires minimal training, adapts to TSP using a single sample, generalizes better across varying TSP instance sizes, and reduces performance degradation. Furthermore, the training process with adaptation and fine-tuning is completed within minutes. The methodology achieves strong solution quality even without post-processing and performs comparably to other models that rely on post-processing refinement. Our findings suggest that the TabPFN model is a promising approach for solving structured and CO problems efficiently under training resource constraints and rapid deployment requirements.
Problem

Research questions and friction points this paper is trying to address.

Applying TabPFN to solve the Travelling Salesman Problem efficiently
Reducing time and data-intensive training in combinatorial optimization
Achieving strong generalization across varying TSP instance sizes
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adapts TabPFN foundation model for combinatorial optimization
Uses node-based adaptation strategy for TSP routes
Enables minimal training with single-sample fine-tuning
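The single-sample adaptation above implies turning one solved TSP instance into many node-level supervised examples. A minimal sketch of one plausible encoding (assumed for illustration, not taken from the paper): each row pairs the current node's coordinates with a candidate node's coordinates, labelled 1 if that candidate is the tour's true next node. A tabular model such as `TabPFNClassifier` could then be fit on the rows from a single instance.

```python
import numpy as np

def node_level_pairs(coords, tour):
    """Turn one solved TSP instance into (features, label) training pairs.

    For each step along the known tour, emit one row per not-yet-visited
    candidate: [current_x, current_y, cand_x, cand_y], labelled 1 only for
    the tour's true next node. This is an assumed encoding for illustration.
    """
    X, y = [], []
    n = len(tour)
    for i in range(n - 1):
        cur, nxt = tour[i], tour[i + 1]
        for cand in tour[i + 1:]:    # nodes not yet visited at step i
            X.append(np.concatenate([coords[cur], coords[cand]]))
            y.append(1 if cand == nxt else 0)
    return np.array(X), np.array(y)

coords = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
tour = [0, 1, 2, 3]                  # one known-good tour for this instance
X, y = node_level_pairs(coords, tour)
print(X.shape, int(y.sum()))  # → (6, 4) 3
```

A 4-node instance already yields 6 labelled rows (3 positive), which is how a single solved sample can supply enough tabular data for TabPFN-style in-context fitting.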
Nguyen Gia Hien Vu
Product Design and Optimization Laboratory, Simon Fraser University, Surrey, BC, Canada
Yifan Tang
SF Motors Inc
Rey Lim
Simon Fraser University Alumnus, Burnaby, BC, Canada
Yifan Yang
School of Computing Science, Simon Fraser University, Burnaby, BC, Canada
Hang Ma
Assistant Professor of Computing Science, Simon Fraser University
Artificial Intelligence, Multi-Robot Systems, Automated Planning, Heuristic Search
Ke Wang
Product Design and Optimization Laboratory, Simon Fraser University, Surrey, BC, Canada
G. G. Wang
Product Design and Optimization Laboratory, Simon Fraser University, Surrey, BC, Canada