T-JEPA: Augmentation-Free Self-Supervised Learning for Tabular Data

📅 2024-10-07
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
📄 PDF
🤖 AI Summary
Designing effective data augmentations for self-supervised learning on tabular data remains a fundamental challenge due to the absence of natural, semantically preserving transformations. Method: This paper proposes T-JEPA—a data augmentation-free joint-embedding predictive architecture—that learns discriminative representations by performing mutual prediction among latent embeddings of disjoint feature groups. Contribution/Results: T-JEPA establishes the first augmentation-agnostic JEPA paradigm for structured data, incorporating a regularized token mechanism to ensure training stability and implicitly discovering unsupervised feature importance. On diverse classification and regression benchmarks, T-JEPA significantly outperforms supervised baselines trained directly on raw features and, in several cases, matches or surpasses gradient-boosted trees. Interpretability analysis confirms that T-JEPA automatically attends to downstream-relevant features, demonstrating inherent feature selection capability without explicit supervision.

📝 Abstract
Self-supervision is often used for pre-training to foster performance on a downstream task by constructing meaningful representations of samples. Self-supervised learning (SSL) generally involves generating different views of the same sample and thus requires data augmentations that are challenging to construct for tabular data. This constitutes one of the main challenges of self-supervision for structured data. In the present work, we propose a novel augmentation-free SSL method for tabular data. Our approach, T-JEPA, relies on a Joint Embedding Predictive Architecture (JEPA) and is akin to mask reconstruction in the latent space. It involves predicting the latent representation of one subset of features from the latent representation of a different subset within the same sample, thereby learning rich representations without augmentations. We use our method as a pre-training technique and train several deep classifiers on the obtained representations. Our experimental results demonstrate a substantial improvement in both classification and regression tasks, outperforming models trained directly on samples in their original data space. Moreover, T-JEPA enables some methods to consistently outperform or match the performance of traditional methods like Gradient Boosted Decision Trees. To understand why, we extensively characterize the obtained representations and show that T-JEPA effectively identifies relevant features for downstream tasks without access to the labels. Additionally, we introduce regularization tokens, a novel regularization method critical for the training of JEPA-based models on structured data.
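The core objective described in the abstract (predicting the latent representation of one feature subset from that of a disjoint subset within the same sample) can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the linear encoders, the predictor, the dimensions, and the function name `tjepa_loss` are all illustrative assumptions standing in for the paper's learned networks, and the target encoder would in practice be updated as an EMA copy without gradients.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy tabular batch: 4 samples, 8 features.
X = rng.normal(size=(4, 8))

# Disjoint feature groups within each sample: context and target.
context_idx = [0, 1, 2, 3]
target_idx = [4, 5, 6, 7]

d = 16  # latent dimension (illustrative)

# Stand-in linear "encoders" and predictor; in the paper these are
# learned networks, with the target encoder held as an EMA copy.
W_context = rng.normal(size=(len(context_idx), d)) * 0.1
W_target = rng.normal(size=(len(target_idx), d)) * 0.1
W_pred = rng.normal(size=(d, d)) * 0.1

def tjepa_loss(X):
    """Predict the latent embedding of the target feature group
    from the latent embedding of the context group (same sample),
    and score the prediction with an L2 loss in latent space."""
    z_context = X[:, context_idx] @ W_context   # context embeddings
    z_target = X[:, target_idx] @ W_target      # target embeddings
    z_pred = z_context @ W_pred                 # predicted target embeddings
    return float(np.mean((z_pred - z_target) ** 2))

loss = tjepa_loss(X)
```

Because both views are feature subsets of the same row, no data augmentation is needed, which is the point of the augmentation-free design.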
Problem

Research questions and friction points this paper is trying to address.

Develops augmentation-free SSL for tabular data
Predicts latent features within samples without augmentations
Improves downstream task performance vs traditional methods
Innovation

Methods, ideas, or system contributions that make the work stand out.

Augmentation-free SSL for tabular data
Latent space feature subset prediction
Regularization tokens for structured data
Hugo Thimonier
Université Paris-Saclay, CNRS, CentraleSupélec, Laboratoire Interdisciplinaire des Sciences du Numérique, 91190, Gif-sur-Yvette, France; Emobot, France.
José Lucas De Melo Costa
Université Paris-Saclay, CNRS, CentraleSupélec, Laboratoire Interdisciplinaire des Sciences du Numérique, 91190, Gif-sur-Yvette, France.
Fabrice Popineau
Université Paris-Saclay, CNRS, CentraleSupélec, Laboratoire Interdisciplinaire des Sciences du Numérique, 91190, Gif-sur-Yvette, France.
Arpad Rimmel
Université Paris-Saclay, CNRS, CentraleSupélec, Laboratoire Interdisciplinaire des Sciences du Numérique, 91190, Gif-sur-Yvette, France.
Bich-Liên Doan
Université Paris-Saclay, CNRS, CentraleSupélec, Laboratoire Interdisciplinaire des Sciences du Numérique, 91190, Gif-sur-Yvette, France.