Local-Global Prompt Learning via Sparse Optimal Transport

📅 2026-03-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the issue of redundant and overlapping local prompts in few-shot vision-language models by proposing a fine-grained alignment method based on sparse optimal transport. It constructs category-conditional sparse sets of image patches through visual-visual attention and incorporates entropy-regularized optimal transport to explicitly assign salient regions to corresponding class-specific prompts. This approach enables soft partitioning of local regions, mitigating prompt collapse while preserving the geometric structure of the CLIP feature space—without requiring learnable projections. Evaluated under the 16-shot setting across 11 standard benchmarks, the method achieves an average accuracy of 85.1% and sets a new state-of-the-art in out-of-distribution detection with an AUC of 94.2%.

📝 Abstract
Few-shot adaptation of vision-language models (VLMs) like CLIP typically relies on learning textual prompts matched to global image embeddings. Recent works extend this paradigm by incorporating local image-text alignment to capture fine-grained visual cues, yet these approaches often select local regions independently for each prompt, leading to redundant local feature usage and prompt overlap. We propose SOT-GLP, which introduces a shared sparse patch support and balanced optimal transport allocation to explicitly partition salient visual regions among class-specific local prompts while preserving global alignment. Our method learns shared global prompts and class-specific local prompts. The global branch maintains standard image-text matching for robust category-level alignment. The local branch constructs a class-conditioned sparse patch set using V-V attention and aligns it to multiple class-specific prompts via balanced entropic optimal transport, yielding a soft partition of patches that prevents prompt overlap and collapse. We evaluate our method on two complementary objectives: (i) few-shot classification accuracy on 11 standard benchmarks and (ii) out-of-distribution (OOD) detection. On the standard 11-dataset benchmark with 16-shot ViT-B/16, SOT-GLP achieves 85.1% average accuracy, outperforming prior prompt-learning methods. We identify a distinct accuracy-robustness trade-off in prompt learning: while learnable projections optimize in-distribution fit, they alter the foundational feature space. We demonstrate that a projection-free local alignment preserves the native geometry of the CLIP manifold, yielding state-of-the-art OOD detection performance (94.2% AUC) that surpasses fully adapted models. Implementation available at: https://github.com/Deniz2304988/SOT-GLP
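The core local-alignment step described above, balanced entropic optimal transport between a sparse set of image patches and class-specific prompts, can be sketched with plain Sinkhorn iterations. This is an illustrative reconstruction, not the authors' implementation: the feature dimensions, the cosine cost, the uniform marginals, and the regularization value are assumptions made for the sketch.

```python
import numpy as np

def sinkhorn(cost, reg=0.1, n_iters=200):
    """Balanced entropic OT between uniform marginals via Sinkhorn iterations.

    cost: (num_patches, num_prompts) cost matrix, e.g. 1 - cosine similarity.
    Returns a transport plan whose rows and columns sum to the uniform
    marginals, i.e. a soft partition of patches among the prompts.
    """
    P, Q = cost.shape
    a = np.full(P, 1.0 / P)              # uniform mass over patches
    b = np.full(Q, 1.0 / Q)              # uniform mass over prompts
    K = np.exp(-cost / reg)              # Gibbs kernel
    u, v = np.ones(P), np.ones(Q)
    for _ in range(n_iters):             # alternating marginal projections
        u = a / (K @ v)
        v = b / (K.T @ u)
    return u[:, None] * K * v[None, :]   # transport plan T

# Toy example: 6 patch features aligned to 3 class-specific prompts
# (random vectors stand in for the sparse V-V-attention patch set and
# the learned prompt embeddings).
rng = np.random.default_rng(0)
patches = rng.normal(size=(6, 8))
prompts = rng.normal(size=(3, 8))
patches /= np.linalg.norm(patches, axis=1, keepdims=True)
prompts /= np.linalg.norm(prompts, axis=1, keepdims=True)

cost = 1.0 - patches @ prompts.T         # cosine cost
T = sinkhorn(cost)                       # (6, 3) soft assignment of patches
```

Because both marginals are enforced, every prompt receives an equal share of total patch mass, which is what prevents several prompts from collapsing onto the same salient region.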
Problem

Research questions and friction points this paper is trying to address.

few-shot adaptation
vision-language models
local-global alignment
prompt overlap
redundant local features
Innovation

Methods, ideas, or system contributions that make the work stand out.

Sparse Optimal Transport
Prompt Learning
Local-Global Alignment
Few-shot Adaptation
Out-of-Distribution Detection
Deniz Kizaroğlu
Graduate School of Informatics, Middle East Technical University, Ankara, Turkey
Ülku Tuncer Küçüktas
Department of Electrical and Electronics Engineering, Gazi University, Ankara, Turkey
Emre Çakmakyurdu
Graduate School of Informatics, Middle East Technical University, Ankara, Turkey
Alptekin Temizel
Middle East Technical University, Ankara, Turkey
video surveillance · computer vision · machine learning · deep learning · GPU programming