🤖 AI Summary
Zero-shot composed image retrieval (ZS-CIR) faces several challenges: weak semantic representation of pseudo-word tokens, inconsistency between training and inference, and heavy reliance on large-scale synthetic data. This paper proposes a novel "Map-to-Compose" two-stage decoupled framework: (1) a vision-semantic injection stage that strengthens fine-grained semantic alignment from images to pseudo-word tokens, and (2) a lightweight text-encoder fine-tuning stage that achieves soft alignment and fusion between pseudo-words and modification texts. The method eliminates the need for large-scale synthetic data and generalizes effectively across both high- and low-quality triplets, achieving significant improvements over state-of-the-art methods on three standard benchmarks while using only a small number of synthetic samples. Key contributions include the first decoupled two-stage training paradigm for ZS-CIR, the vision-semantic injection mechanism, and a novel soft text-alignment objective.
📝 Abstract
Composed Image Retrieval (CIR) is a challenging multimodal task that retrieves a target image given a reference image and accompanying modification text. Because annotating CIR triplet datasets is costly, zero-shot (ZS) CIR has gained traction as a promising alternative. Existing studies mainly focus on projection-based methods, which map an image to a single pseudo-word token. These methods face three critical challenges: (1) insufficient representation capacity of the pseudo-word token, (2) discrepancies between the training and inference phases, and (3) reliance on large-scale synthetic data. To address these issues, we propose a two-stage framework in which training proceeds from mapping to composing. In the first stage, we enhance image-to-pseudo-word-token learning by introducing a visual semantic injection module and a soft text alignment objective, enabling the token to capture richer, fine-grained image information. In the second stage, we optimize the text encoder with a small amount of synthetic triplet data so that it can effectively extract compositional semantics, combining pseudo-word tokens with the modification text for accurate target image retrieval. The strong visual-to-pseudo mapping established in the first stage provides a solid foundation for the second, making our approach compatible with both high- and low-quality synthetic data and able to achieve significant performance gains with only a small amount of synthetic data. Extensive experiments on three public datasets show that our method outperforms existing approaches.
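To make the projection-based ZS-CIR pipeline described above concrete, here is a minimal NumPy sketch of the generic "map, then compose" idea: a learned projection maps a reference-image embedding to a pseudo-word token, the token is fused with the modification-text embedding into a single query, and gallery images are ranked by cosine similarity. This is an illustrative toy, not the paper's method: `project_to_token`, `compose_query`, the additive fusion, and the random embeddings are all assumptions standing in for a CLIP-like encoder and the paper's injection/alignment modules.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 512  # shared embedding dimension, CLIP-style (assumption)

# Illustrative stand-in for the learned image-to-pseudo-word projection.
W = rng.standard_normal((D, D)) / np.sqrt(D)

def project_to_token(img_emb):
    """Stage-1 idea: map a reference-image embedding to a pseudo-word token."""
    return W @ img_emb

def compose_query(pseudo_token, text_emb, alpha=0.5):
    """Stage-2 idea, heavily simplified: fuse the pseudo-word token with the
    modification-text embedding into one unit-norm query vector. A real
    system would instead feed the token through the (fine-tuned) text
    encoder inside a prompt such as "a photo of [*] that ..."."""
    q = alpha * pseudo_token + (1 - alpha) * text_emb
    return q / np.linalg.norm(q)

def retrieve(query, gallery):
    """Rank gallery images by cosine similarity to the composed query."""
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    return np.argsort(-(g @ query))

# Toy inputs: random embeddings standing in for encoder outputs.
ref_img = rng.standard_normal(D)
mod_text = rng.standard_normal(D)
gallery = rng.standard_normal((100, D))

token = project_to_token(ref_img)
query = compose_query(token, mod_text)
ranking = retrieve(query, gallery)
print(ranking[:5])  # indices of the top-5 gallery candidates
```

The additive fusion here is the simplest possible composition; the two-stage framework argues that composition quality hinges on how well stage one grounds the pseudo-word token in the image before any fusion happens.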