🤖 AI Summary
This work identifies a previously unrecognized property of Joint Embedding Predictive Architectures (JEPAs): their collapse-avoidance term implicitly learns data density during training. Leveraging this insight, we theoretically establish that the anti-collapse mechanism induces a closed-form estimator of sample density. Based on this, we propose JEPA-SCORE, a method that, for the first time, efficiently extracts sample-wise probability estimates from pretrained JEPA models (e.g., I-JEPA, DINOv2, MetaCLIP) without additional training or parameters. JEPA-SCORE models density via spectral properties of the model's input Jacobian at each sample. Extensive evaluation on synthetic data, controlled benchmarks, and ImageNet demonstrates that JEPA-SCORE achieves consistently high-accuracy density estimation across diverse JEPA architectures. It significantly outperforms existing unsupervised baselines in downstream tasks including data pruning and anomaly detection.
📄 Abstract
Joint Embedding Predictive Architectures (JEPAs) learn representations able to solve numerous downstream tasks out-of-the-box. JEPAs combine two objectives: (i) a latent-space prediction term, i.e., the representation of a slightly perturbed sample must be predictable from the original sample's representation, and (ii) an anti-collapse term, i.e., not all samples should have the same representation. While (ii) is often considered an obvious remedy to representation collapse, we uncover that JEPAs' anti-collapse term does much more: it provably estimates the data density. In short, any successfully trained JEPA can be used to obtain sample probabilities, e.g., for data curation, outlier detection, or simply for density estimation. Our theoretical finding is agnostic to the dataset and architecture used; in any case, one can compute the learned probability of a sample $x$ efficiently and in closed form from the model's Jacobian matrix at $x$. Our findings are empirically validated across datasets (synthetic, controlled, and ImageNet), across different self-supervised learning methods falling under the JEPA family (I-JEPA and DINOv2), and on multimodal models such as MetaCLIP. We denote the method extracting the JEPA-learned density as JEPA-SCORE.
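To make the Jacobian-based recipe concrete, here is a minimal sketch of a score derived from the spectrum of an encoder's input Jacobian at a sample. The `toy_encoder`, the finite-difference Jacobian, and the log-singular-value score are all illustrative stand-ins chosen for this example; the paper's exact closed-form estimator may weight the spectrum differently.

```python
import numpy as np

def toy_encoder(x):
    # Hypothetical stand-in for a pretrained JEPA encoder f: R^2 -> R^3.
    W = np.array([[1.0, 0.5],
                  [0.0, 2.0],
                  [1.5, -1.0]])
    return np.tanh(W @ x)

def jacobian(f, x, eps=1e-5):
    # Finite-difference Jacobian of f at x (autodiff would be used in practice).
    fx = f(x)
    J = np.zeros((fx.size, x.size))
    for i in range(x.size):
        d = np.zeros_like(x)
        d[i] = eps
        J[:, i] = (f(x + d) - fx) / eps
    return J

def jepa_score(f, x):
    # Spectral score: sum of log singular values of the Jacobian at x.
    # Illustrative only -- the paper's estimator is derived from the
    # anti-collapse term and may differ from this simple log-volume form.
    s = np.linalg.svd(jacobian(f, x), compute_uv=False)
    return float(np.sum(np.log(s + 1e-12)))

score = jepa_score(toy_encoder, np.array([0.3, -0.7]))
print(score)
```

The key property this sketch shows is that the score is a per-sample scalar computed from a single Jacobian evaluation, i.e., no extra training, parameters, or dataset passes are required.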