Cross the Gap: Exposing the Intra-modal Misalignment in CLIP via Modality Inversion

📅 2025-02-06
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Cross-modal contrastive models like CLIP perform suboptimally on intra-modal tasks (e.g., image-to-image retrieval) because their inter-modal contrastive loss imposes no intra-modal constraints, leading to "intra-modal misalignment": inconsistent embedding similarities among samples within the same modality. Method: The paper first defines and empirically validates this phenomenon, then leverages unsupervised, optimization-based modality inversion (bidirectional image↔text techniques requiring no auxiliary data or trained adapters) to map representations into the complementary modality. Crucially, approaching intra-modal tasks inter-modally significantly improves retrieval, challenging the assumption that unimodal tasks are best served within a single modality. Contribution/Results: The approach outperforms intra-modal baselines on more than fifteen image-to-image and text-to-text retrieval datasets; conversely, handling a natively inter-modal task (zero-shot image classification) intra-modally degrades performance. Code is publicly available.

📝 Abstract
Pre-trained multi-modal Vision-Language Models like CLIP are widely used off-the-shelf for a variety of applications. In this paper, we show that the common practice of individually exploiting the text or image encoders of these powerful multi-modal models is highly suboptimal for intra-modal tasks like image-to-image retrieval. We argue that this is inherently due to the CLIP-style inter-modal contrastive loss that does not enforce any intra-modal constraints, leading to what we call intra-modal misalignment. To demonstrate this, we leverage two optimization-based modality inversion techniques that map representations from their input modality to the complementary one without any need for auxiliary data or additional trained adapters. We empirically show that, in the intra-modal tasks of image-to-image and text-to-text retrieval, approaching these tasks inter-modally significantly improves performance with respect to intra-modal baselines on more than fifteen datasets. Additionally, we demonstrate that approaching a native inter-modal task (e.g. zero-shot image classification) intra-modally decreases performance, further validating our findings. Finally, we show that incorporating an intra-modal term in the pre-training objective or narrowing the modality gap between the text and image feature embedding spaces helps reduce the intra-modal misalignment. The code is publicly available at: https://github.com/miccunifi/Cross-the-Gap.
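The modality gap mentioned in the abstract is commonly measured as the distance between the centroids of the image and text embedding clusters on the unit sphere. The following toy sketch (synthetic vectors only, not the paper's actual CLIP features or inversion method) illustrates how two modalities occupying separate cones produce a large centroid gap and low cross-modal similarities relative to intra-modal ones:

```python
import numpy as np

rng = np.random.default_rng(0)

def unit(v):
    # L2-normalize along the last axis, as CLIP does with its embeddings
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

# Synthetic stand-ins: each modality occupies a narrow cone around its own
# random anchor direction (a common picture of CLIP's embedding geometry)
anchor_img = unit(rng.normal(size=64))
anchor_txt = unit(rng.normal(size=64))
imgs = unit(anchor_img + 0.05 * rng.normal(size=(100, 64)))
txts = unit(anchor_txt + 0.05 * rng.normal(size=(100, 64)))

# Modality gap: Euclidean distance between the two modality centroids
gap = np.linalg.norm(imgs.mean(axis=0) - txts.mean(axis=0))

# Average cosine similarity within vs. across modalities
intra_sim = (imgs @ imgs.T).mean()
cross_sim = (imgs @ txts.T).mean()
print(f"gap={gap:.2f}  intra={intra_sim:.2f}  cross={cross_sim:.2f}")
```

Because the cones are far apart, the intra-modal similarities are high and tightly clustered while cross-modal ones are near zero; the paper's point is that, despite this gap, mapping a query to the complementary modality before retrieving yields better-aligned rankings than comparing raw same-modality features.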
Problem

Research questions and friction points this paper is trying to address.

Identify and validate intra-modal misalignment in CLIP
Improve intra-modal (image-to-image, text-to-text) retrieval performance
Address the modality gap between the text and image embedding spaces
Innovation

Methods, ideas, or system contributions that make the work stand out.

Optimization-based modality inversion without auxiliary data or adapters
Intra-modal term in the pre-training objective
Narrowed modality gap between embedding spaces