🤖 AI Summary
This paper addresses Zero-Shot Composed Image Retrieval (ZS-CIR): given a reference image and a relative caption describing a visual modification (e.g., "more formal", "with stripes"), the goal is to retrieve target images that remain visually consistent with the reference while applying the described change, without access to labeled training data. The authors formally define the ZS-CIR task. To solve it, they propose iSEARLE, which maps the reference image into a pseudo-word token in CLIP's token embedding space via textual inversion and combines it with the relative caption, so the composed query can be handled as standard text-to-image retrieval. They also introduce CIRCO, the first open-domain ZS-CIR benchmark in which each query is annotated with multiple ground truths and a semantic categorization. Extensive experiments show that iSEARLE achieves state-of-the-art performance on FashionIQ, CIRR, and CIRCO, and that it generalizes to two additional evaluation settings, namely domain conversion and object composition, outperforming prior methods. The dataset, code, and models are publicly released.
📝 Abstract
Given a query consisting of a reference image and a relative caption, Composed Image Retrieval (CIR) aims to retrieve target images visually similar to the reference one while incorporating the changes specified in the relative caption. The reliance of supervised methods on labor-intensive manually labeled datasets hinders their broad applicability. In this work, we introduce a new task, Zero-Shot CIR (ZS-CIR), that addresses CIR without the need for a labeled training dataset. We propose an approach named iSEARLE (improved zero-Shot composEd imAge Retrieval with textuaL invErsion) that involves mapping the visual information of the reference image into a pseudo-word token in CLIP token embedding space and combining it with the relative caption. To foster research on ZS-CIR, we present an open-domain benchmarking dataset named CIRCO (Composed Image Retrieval on Common Objects in context), the first CIR dataset where each query is labeled with multiple ground truths and a semantic categorization. The experimental results illustrate that iSEARLE obtains state-of-the-art performance on three different CIR datasets -- FashionIQ, CIRR, and the proposed CIRCO -- and two additional evaluation settings, namely domain conversion and object composition. The dataset, the code, and the model are publicly available at https://github.com/miccunifi/SEARLE.
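The core mechanism, mapping the reference image to a pseudo-word token in CLIP's token embedding space and splicing it into the relative caption, can be sketched as below. This is a minimal illustrative sketch, not the paper's implementation: the frozen CLIP encoders are replaced by random stand-ins, the learnable textual-inversion network `phi` (a small MLP in practice) is a random linear map, and the prompt template "a photo of S* that <caption>" is one plausible choice rather than the exact template used by iSEARLE.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 512  # token-embedding width (e.g., CLIP ViT-B/32 text width)

def image_encoder(image):
    """Stand-in for the frozen CLIP image encoder: image -> feature vector."""
    return rng.standard_normal(D)

def token_embed(words):
    """Stand-in for the frozen CLIP token-embedding lookup."""
    return rng.standard_normal((len(words), D))

# phi: the learnable textual-inversion network mapping an image feature
# to a pseudo-token embedding v* in CLIP's token embedding space.
# Here a random linear projection stands in for the trained network.
W_phi = rng.standard_normal((D, D)) / np.sqrt(D)

def textual_inversion(image):
    return image_encoder(image) @ W_phi  # v*, shape (D,)

def compose_query(image, relative_caption):
    """Build the token-embedding sequence for the composed query
    'a photo of S* that <relative caption>', where S* is the pseudo-word
    whose embedding is v*. The sequence would then be fed to the frozen
    CLIP text encoder for text-to-image retrieval."""
    v_star = textual_inversion(image)
    prefix = token_embed("a photo of".split())
    suffix = token_embed(("that " + relative_caption).split())
    return np.vstack([prefix, v_star[None, :], suffix])

seq = compose_query(image=None, relative_caption="has stripes and is more formal")
print(seq.shape)  # (3 prefix tokens + 1 pseudo-token + 7 suffix tokens, D)
```

In the actual method the composed sequence is encoded by CLIP's frozen text encoder and matched against CLIP image features of the gallery, so retrieval reduces to cosine similarity in CLIP's shared embedding space.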