🤖 AI Summary
This work addresses key limitations of self-attention mechanisms in tabular classification, namely inefficient training, sensitivity to initialization, and high computational overhead, by introducing a lightweight modeling paradigm grounded in optimal transport (OT) theory. Methodologically, it employs class-specific Gaussian distributions as geometric anchors and uses the Wasserstein distance and the Monge gap to quantify how the attention projections evolve during training; an OT-alignment pretraining scheme then replaces the self-attention modules with lightweight MLPs, eliminating the reliance on Transformer architectures. Contributions include: (i) the first application of OT metrics to analyzing the training dynamics of tabular models, and (ii) an attention-free, OT-driven pretraining framework that achieves Transformer-comparable accuracy on multi-class and biomedical benchmarks at significantly lower training cost, scales better under standardized inputs, and improves consistently as the geometric (dummy-distribution) modeling becomes more faithful.
📝 Abstract
This thesis examines self-attention training through the lens of Optimal Transport (OT) and develops an OT-based alternative for tabular classification. The study tracks intermediate projections of the self-attention layer during training and evaluates their evolution using discrete OT metrics, including Wasserstein distance, Monge gap, optimality, and efficiency. Experiments are conducted on classification tasks with two and three classes, as well as on a biomedical dataset.
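For context, the two principal metrics can be stated in their standard form (a reference formulation only; the thesis may use discrete or regularized variants, and its "optimality" and "efficiency" scores are not reproduced here). For discrete distributions $\mu = \sum_i a_i \delta_{x_i}$ and $\nu = \sum_j b_j \delta_{y_j}$ with ground cost $c$,

$$
W_c(\mu, \nu) \;=\; \min_{P \in \Pi(a, b)} \sum_{i, j} P_{ij}\, c(x_i, y_j),
\qquad
\Pi(a, b) = \{\, P \ge 0 : P\mathbf{1} = a,\; P^{\top}\mathbf{1} = b \,\},
$$

and the Monge gap of a map $T$ relative to a reference measure $\rho$ is

$$
\mathcal{M}^{c}_{\rho}(T) \;=\; \int c\big(x, T(x)\big)\, d\rho(x) \;-\; W_c\big(\rho,\, T_{\sharp}\rho\big) \;\ge\; 0,
$$

which vanishes exactly when $T$ is an optimal (Monge) map between $\rho$ and its push-forward $T_{\sharp}\rho$.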
Results indicate that the final self-attention mapping often approximates the optimal OT coupling, yet the training trajectory remains inefficient. Pretraining the MLP component on synthetic data partially improves convergence but remains sensitive to initialization. To address these limitations, an OT-based algorithm is introduced: it generates class-specific dummy Gaussian distributions, computes an OT alignment between the data and these dummies, and trains an MLP to generalize the resulting mapping. The method achieves accuracy comparable to Transformers while reducing computational cost and scaling more efficiently under standardized inputs, though its performance depends on careful design of the dummy geometry. All experiments and implementations are conducted in R.
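As a rough illustration of the algorithm described above, the following is a minimal R sketch rather than the thesis implementation: the `transport` and `nnet` packages, the toy 2-D data, the anchor locations, and all hyperparameters are assumptions made for this example.

```r
library(transport)  # discrete optimal transport: pp(), transport()
library(nnet)       # single-hidden-layer MLP

set.seed(1)

## Toy 2-class tabular data: n points in 2 dimensions per class (illustrative).
n <- 100
X <- rbind(matrix(rnorm(2 * n, mean = 0), ncol = 2),
           matrix(rnorm(2 * n, mean = 3), ncol = 2))
y <- rep(0:1, each = n)

## Step 1: class-specific "dummy" Gaussian targets acting as geometric anchors.
## Means and spreads are arbitrary choices for this sketch.
dummy <- rbind(matrix(rnorm(2 * n, mean = -5, sd = 0.5), ncol = 2),
               matrix(rnorm(2 * n, mean =  5, sd = 0.5), ncol = 2))

## Step 2: OT alignment between each class and its dummy target.
## transport() on two point patterns returns an optimal point-to-point coupling.
target <- matrix(NA_real_, nrow = 2 * n, ncol = 2)
for (k in 0:1) {
  idx  <- which(y == k)
  plan <- transport(pp(X[idx, ]), pp(dummy[idx, ]), p = 2)
  target[idx[plan$from], ] <- dummy[idx, , drop = FALSE][plan$to, ]
}

## Step 3: train a small MLP to generalize the point-wise OT mapping.
fit <- nnet(x = X, y = target, size = 16, linout = TRUE, maxit = 500, trace = FALSE)

## The fitted map approximately sends points toward their class anchor;
## a class label can then be read off from the nearest anchor.
head(predict(fit, X))
```

In this sketch the OT alignment supplies point-wise regression targets, so the MLP is trained by ordinary supervised regression with no attention module involved; classification follows from which class anchor an embedded point lands closest to.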