Gaussian Embeddings: How JEPAs Secretly Learn Your Data Density

πŸ“… 2025-10-07
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
This work identifies a previously unrecognized property of Joint Embedding Predictive Architectures (JEPAs): the anti-collapse term in their training objective implicitly learns the data density. The authors establish theoretically that this mechanism induces a closed-form estimator of sample density. Building on this result, they propose JEPA-SCORE, a method that, for the first time, efficiently extracts sample-wise probability estimates from pretrained JEPA models (e.g., I-JEPA, DINOv2, MetaCLIP) without additional training or parameters. JEPA-SCORE estimates density from the spectral properties of the model's input Jacobian at each sample. Evaluation on synthetic data, controlled benchmarks, and ImageNet shows that JEPA-SCORE delivers consistently accurate density estimates across diverse JEPA architectures and significantly outperforms unsupervised baselines on downstream tasks including data pruning and anomaly detection.

πŸ“ Abstract
Joint Embedding Predictive Architectures (JEPAs) learn representations able to solve numerous downstream tasks out-of-the-box. JEPAs combine two objectives: (i) a latent-space prediction term, i.e., the representation of a slightly perturbed sample must be predictable from the original sample's representation, and (ii) an anti-collapse term, i.e., not all samples should have the same representation. While (ii) is often considered an obvious remedy to representation collapse, we uncover that JEPAs' anti-collapse term does much more--it provably estimates the data density. In short, any successfully trained JEPA can be used to get sample probabilities, e.g., for data curation, outlier detection, or simply for density estimation. Our theoretical finding is agnostic of the dataset and architecture used--in any case one can compute the learned probabilities of sample $x$ efficiently and in closed-form using the model's Jacobian matrix at $x$. Our findings are empirically validated across datasets (synthetic, controlled, and ImageNet) and across different Self-Supervised Learning methods falling under the JEPA family (I-JEPA and DINOv2) and on multimodal models, such as MetaCLIP. We denote the method extracting the JEPA learned density as JEPA-SCORE.
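The abstract's closed-form claim (sample probability from the model's Jacobian at $x$) can be sketched as a toy illustration. Note the assumptions: the encoder below is a made-up random network, the Jacobian is computed numerically rather than by autodiff, and the score (a negative log-spectrum sum, i.e., a log-volume term from $J J^\top$) is only a stand-in suggested by the summary's "spectral properties of the input Jacobian", not the paper's exact JEPA-SCORE formula.

```python
import numpy as np

def jacobian(f, x, eps=1e-5):
    """Numerical Jacobian of f at x via central differences."""
    x = np.asarray(x, dtype=float)
    fx = np.asarray(f(x))
    J = np.zeros((fx.size, x.size))
    for i in range(x.size):
        dx = np.zeros_like(x)
        dx[i] = eps
        J[:, i] = (np.asarray(f(x + dx)) - np.asarray(f(x - dx))) / (2 * eps)
    return J

def jepa_score_sketch(f, x, reg=1e-8):
    """Toy density score from the spectrum of the encoder's Jacobian at x.

    Uses -sum(log(s_i^2 + reg)) over singular values s_i of J, i.e. a
    (regularized) negative log det(J J^T): stronger local expansion by the
    encoder maps to a lower score. Illustrative only -- the paper's exact
    JEPA-SCORE estimator may differ in form, sign, and normalization.
    """
    J = jacobian(f, x)
    s = np.linalg.svd(J, compute_uv=False)
    return -np.sum(np.log(s**2 + reg))

# Hypothetical encoder: a fixed random two-layer net, R^4 -> R^3.
rng = np.random.default_rng(0)
W1, W2 = rng.standard_normal((8, 4)), rng.standard_normal((3, 8))
f = lambda x: W2 @ np.tanh(W1 @ x)

score = jepa_score_sketch(f, rng.standard_normal(4))
print(score)
```

In the paper's setting the encoder would be a pretrained JEPA (I-JEPA, DINOv2, MetaCLIP), the Jacobian would come from automatic differentiation, and no retraining is needed: the score is read off a forward-mode quantity at each sample.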
Problem

Research questions and friction points this paper is trying to address.

Do JEPAs' anti-collapse terms implicitly estimate the data density?
Can sample probabilities be extracted from pretrained JEPAs for outlier detection and data curation?
How can the learned density be computed efficiently, in closed form, from the model's Jacobian?
Innovation

Methods, ideas, or system contributions that make the work stand out.

Frames JEPAs as combining a latent-space prediction term with an anti-collapse term
Proves the anti-collapse term induces a closed-form density estimator via the model's Jacobian matrix
Introduces JEPA-SCORE, which extracts this learned density from pretrained models for downstream applications
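Once per-sample scores are available (from JEPA-SCORE or any density estimator), the downstream uses named above, data pruning and anomaly detection, reduce to ranking samples by score. A minimal sketch with made-up score values:

```python
import numpy as np

def rank_by_score(scores):
    """Order sample indices from lowest to highest estimated density.

    Low-density samples are the natural candidates for pruning or for
    flagging as anomalies.
    """
    return np.argsort(np.asarray(scores, dtype=float))

# Hypothetical per-sample density scores (not real model output).
scores = [2.1, -5.0, 1.8, 0.3]
order = rank_by_score(scores)
print(order)  # -> [1 3 2 0]: index 1 (score -5.0) is the strongest outlier candidate
```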