FOR: Finetuning for Object Level Open Vocabulary Image Retrieval

📅 2024-12-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the challenges of fine-grained object localization and weak semi-supervised performance in low-label regimes for open-vocabulary image retrieval, this paper proposes an object-centric fine-tuning framework, FOR. It introduces a specialized decoder variant of the CLIP head and trains it within a multi-objective framework that couples contrastive learning with detection-style localization supervision, preserving the open-vocabulary generalization of CLIP while fine-tuning on closed-set labels. The framework also remains robust in semi-supervised settings with very low label rates (e.g., 1%–5% annotation). Across three standard benchmarks, it improves over prior state-of-the-art methods by up to 8 mAP@50 points. The core contribution is moving beyond the prevailing zero-adaptation use of pre-trained CLIP: enabling efficient, object-level fine-grained retrieval while balancing accuracy, cross-vocabulary generalization, and annotation efficiency.
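The joint objective sketched above — a CLIP-style contrastive term combined with detection-style localization supervision — can be illustrated as a weighted sum of an InfoNCE loss and a box-regression loss. This is a hedged reconstruction, not the paper's actual loss: the function names, the smooth-L1 choice for localization, and the `loc_weight` balance are assumptions for illustration only.

```python
import numpy as np

def info_nce(img_emb, txt_emb, tau=0.07):
    """CLIP-style contrastive loss: matched (image, text) pairs sit on the diagonal."""
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / tau                       # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)      # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))              # -log p(correct text | image)

def smooth_l1(pred, target, beta=1.0):
    """Detection-style box regression loss (quadratic near zero, linear beyond beta)."""
    d = np.abs(pred - target)
    return np.mean(np.where(d < beta, 0.5 * d ** 2 / beta, d - 0.5 * beta))

def multi_objective_loss(img_emb, txt_emb, pred_boxes, gt_boxes, loc_weight=1.0):
    """Illustrative combined objective: contrastive + weighted localization term."""
    return info_nce(img_emb, txt_emb) + loc_weight * smooth_l1(pred_boxes, gt_boxes)
```

The `loc_weight` hyperparameter trades off retrieval alignment against localization accuracy; the paper's actual balancing scheme is not specified in this card.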

📝 Abstract
As working with large datasets becomes standard, the task of accurately retrieving images containing objects of interest by an open set textual query gains practical importance. The current leading approach utilizes a pre-trained CLIP model without any adaptation to the target domain, balancing accuracy and efficiency through additional post-processing. In this work, we propose FOR: Finetuning for Object-centric Open-vocabulary Image Retrieval, which allows finetuning on a target dataset using closed-set labels while keeping the visual-language association crucial for open vocabulary retrieval. FOR is based on two design elements: a specialized decoder variant of the CLIP head customized for the intended task, and its coupling within a multi-objective training framework. Together, these design choices result in a significant increase in accuracy, showcasing improvements of up to 8 mAP@50 points over SoTA across three datasets. Additionally, we demonstrate that FOR is also effective in a semi-supervised setting, achieving impressive results even when only a small portion of the dataset is labeled.
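Object-level retrieval as the abstract frames it — finding images that contain an object matching an open-set textual query — can be sketched as scoring each image by its best-matching region embedding. This is a minimal illustrative sketch assuming pre-computed region and query embeddings; `rank_images` and the max-over-regions pooling are assumptions, not the paper's API.

```python
import numpy as np

def rank_images(region_embs, query_emb):
    """Rank images for a text query by their best-matching region.

    region_embs: list of (R_i, d) arrays, one per image (R_i candidate regions).
    query_emb:   (d,) text embedding for the open-vocabulary query.
    Returns (indices sorted best-to-worst, per-image scores)."""
    q = query_emb / np.linalg.norm(query_emb)
    scores = []
    for regs in region_embs:
        r = regs / np.linalg.norm(regs, axis=1, keepdims=True)
        # Object-level score: the single best region decides the image's rank,
        # so a small relevant object is not drowned out by the rest of the scene.
        scores.append(float((r @ q).max()))
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    return order, scores
```

The max-over-regions pooling is what distinguishes object-level retrieval from whole-image retrieval, where a single global embedding per image would be scored instead.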
Problem

Research questions and friction points this paper is trying to address.

Text-based Image Retrieval
Semi-supervised Learning
Open Vocabulary Search
Innovation

Methods, ideas, or system contributions that make the work stand out.

FOR method
Multi-objective training framework
Semi-supervised learning