Transformers for Tabular Data: A Training Perspective of Self-Attention via Optimal Transport

📅 2025-12-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses key limitations of self-attention in tabular classification, namely inefficient training, sensitivity to initialization, and high computational overhead, by introducing a lightweight modeling paradigm grounded in optimal transport (OT) theory. Methodologically, it uses class-specific Gaussian distributions as geometric anchors and leverages the Wasserstein distance and the Monge gap to quantify and track the evolution of attention projections; an OT-alignment pretraining scheme then replaces self-attention modules with lightweight MLPs, eliminating the reliance on Transformer architectures. Contributions include: (i) the first application of OT metrics to analyze the training dynamics of tabular models; (ii) an attention-free, OT-driven pretraining framework that matches Transformer accuracy on multi-class and biomedical benchmarks at significantly lower training cost, scales better under standardized inputs, and improves consistently as the geometric fidelity of the dummy distributions increases.
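The Monge gap mentioned in the summary can be illustrated on discrete samples (a minimal Python sketch; the thesis implements everything in R, and `monge_gap` is an illustrative name, not the author's code): for a map T acting on uniformly weighted samples, the gap is the average transport cost incurred by T minus the optimal (Wasserstein) coupling cost between the inputs and their images, so it vanishes exactly when T is itself an OT map on that sample.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def monge_gap(X, TX):
    """Discrete Monge gap with squared-Euclidean cost:
    mean cost of the map x -> T(x) minus the optimal-coupling
    cost between the inputs X and their images TX."""
    map_cost = ((X - TX) ** 2).sum(axis=1).mean()
    C = cdist(X, TX, metric="sqeuclidean")   # pairwise squared costs
    r, c = linear_sum_assignment(C)          # optimal coupling (permutation)
    ot_cost = C[r, c].mean()
    return map_cost - ot_cost

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
print(monge_gap(X, X + 2.0))  # a translation is already optimal: gap ~ 0
print(monge_gap(X, -X))       # a reflection through the origin wastes cost
```

A translation is the gradient of a convex function, hence an OT map for the squared cost, which is why its gap is zero; the reflection moves mass much further than necessary, producing a strictly positive gap.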

📝 Abstract
This thesis examines self-attention training through the lens of Optimal Transport (OT) and develops an OT-based alternative for tabular classification. The study tracks intermediate projections of the self-attention layer during training and evaluates their evolution using discrete OT metrics, including Wasserstein distance, Monge gap, optimality, and efficiency. Experiments are conducted on classification tasks with two and three classes, as well as on a biomedical dataset. Results indicate that the final self-attention mapping often approximates the OT optimal coupling, yet the training trajectory remains inefficient. Pretraining the MLP component on synthetic data partially improves convergence but is sensitive to initialization. To address these limitations, an OT-based algorithm is introduced: it generates class-specific dummy Gaussian distributions, computes an OT alignment with the data, and trains an MLP to generalize this mapping. The method achieves accuracy comparable to Transformers while reducing computational cost and scaling more efficiently under standardized inputs, though its performance depends on careful dummy-geometry design. All experiments and implementations are conducted in R.
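The Wasserstein distance used throughout the abstract can be computed exactly in the discrete setting the thesis works in (a minimal Python sketch, though the thesis uses R): for two equal-size, uniformly weighted point clouds, the 2-Wasserstein distance reduces to an optimal assignment problem on the squared-cost matrix.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def wasserstein2(X, Y):
    """Exact 2-Wasserstein distance between two equal-size,
    uniformly weighted point clouds X and Y (n x d arrays)."""
    C = cdist(X, Y, metric="sqeuclidean")  # pairwise squared costs
    rows, cols = linear_sum_assignment(C)  # optimal coupling = permutation
    return np.sqrt(C[rows, cols].mean())

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(100, 2))
Y = rng.normal(3.0, 1.0, size=(100, 2))  # same shape, shifted cloud
print(wasserstein2(X, Y))
```

For clouds of unequal size or non-uniform weights, the assignment becomes a general transportation problem; libraries such as POT (`ot.emd`) solve that case.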
Problem

Research questions and friction points this paper is trying to address.

Develops an OT-based alternative to self-attention for tabular classification
Analyzes self-attention training inefficiency using Optimal Transport metrics
Proposes an OT algorithm to reduce computational cost while maintaining accuracy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Optimal Transport metrics evaluate self-attention training evolution
OT-based algorithm uses dummy Gaussian distributions for alignment
Method reduces computational cost while matching Transformer accuracy
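The attention-free pipeline summarized above can be sketched end to end on toy data (Python here, with hedges: the thesis implements this in R on real benchmarks, and all names below are illustrative, not the author's code): sample a well-separated Gaussian "dummy" cloud per class, compute a discrete OT assignment from the data to the dummies, then fit a lightweight MLP to generalize that mapping in place of self-attention.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Toy two-class tabular data (stand-in for the real benchmarks).
X = np.vstack([rng.normal(0.0, 1.0, (50, 2)), rng.normal(4.0, 1.0, (50, 2))])
y = np.repeat([0, 1], 50)

# Class-specific Gaussian "dummies": well-separated geometric anchors.
anchors = {0: np.array([-4.0, -4.0]), 1: np.array([8.0, 8.0])}
D = np.vstack([rng.normal(anchors[c], 0.5, (50, 2)) for c in (0, 1)])

# Discrete OT alignment: each sample is matched to one dummy point.
C = cdist(X, D, metric="sqeuclidean")
rows, cols = linear_sum_assignment(C)  # rows is sorted for a square matrix
targets = D[cols]                      # OT image of each data point

# A lightweight MLP replaces self-attention: learn to generalize the map.
mlp = MLPRegressor(hidden_layer_sizes=(32,), max_iter=3000, random_state=0)
mlp.fit(X, targets)

# Classify by which class anchor the mapped image lands nearest to.
Z = mlp.predict(X)
pred = (np.linalg.norm(Z - anchors[0], axis=1)
        > np.linalg.norm(Z - anchors[1], axis=1)).astype(int)
print("train accuracy:", (pred == y).mean())
```

The dummy geometry matters here exactly as the abstract warns: anchors that are too close, or not aligned with the class structure, degrade the OT assignment and hence the learned map.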
Antonio Candelieri
Associate Professor, University of Milano-Bicocca
Machine Learning, Bayesian Optimization, Data Science, Decision Support
Alessandro Quadrio
Università degli studi di Milano–Bicocca, Scuola di Economia e Statistica