🤖 AI Summary
Existing image-text paired datasets employ overly concise captions, resulting in insufficient fine-grained semantic alignment between textual descriptions and vector-quantized (VQ) codebooks. To address this, we propose TA-VQ, a Text-Augmented VQ framework that first leverages vision-language models to generate rich, descriptive long-text captions, then introduces a three-level (word/phrase/sentence) multi-granularity text encoder coupled with a hierarchical, sampling-based alignment mechanism to enable precise cross-modal matching between the VQ codebook and long-text representations. TA-VQ is a plug-and-play, multi-level alignment architecture: it is compatible with off-the-shelf VQ backbones and requires no modification to the original VQ pipeline for end-to-end integration. Experiments demonstrate that TA-VQ outperforms state-of-the-art methods on image reconstruction and multiple downstream tasks, showing that long-text guidance improves both the semantic expressiveness of the codebook and its cross-modal generalization.
📝 Abstract
Image quantization is a crucial technique in image generation, aimed at learning a codebook that encodes an image into a discrete token sequence. Recently, researchers have explored learning multi-modal (i.e., text-aligned) codebooks by utilizing image caption semantics, aiming to enhance codebook performance in cross-modal tasks. However, existing image-text paired datasets exhibit a notable flaw: the text descriptions tend to be overly concise, failing to adequately describe the images and provide sufficient semantic knowledge, which limits the alignment of text and codebook at a fine-grained level. In this paper, we propose a novel Text-Augmented Codebook Learning framework, named TA-VQ, which generates longer text for each image using a vision-language model for improved text-aligned codebook learning. The long text, however, presents two key challenges: how to encode the text, and how to align the codebook with it. To tackle these challenges, we propose splitting the long text into multiple granularities for encoding, i.e., word, phrase, and sentence, so that the long text can be fully encoded without losing key semantic knowledge. Building on this, a hierarchical encoder and a novel sampling-based alignment strategy are designed to achieve fine-grained codebook-text alignment. Additionally, our method can be seamlessly integrated into existing VQ models. Extensive experiments on reconstruction and various downstream tasks demonstrate its effectiveness compared to previous state-of-the-art approaches.
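The abstract's two ingredients, a multi-granularity split of the long caption and a sampling-based alignment between codebook entries and text embeddings, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the phrase splitter (comma chunks), the function names, and the cosine-similarity loss are all placeholder assumptions, since the abstract does not specify them.

```python
import numpy as np

def split_granularities(caption):
    """Split a long caption into sentence / phrase / word granularities.
    Phrases are approximated as comma-separated chunks; a real system
    would likely use a phrase parser (unspecified in the abstract)."""
    sentences = [s.strip() for s in caption.split(".") if s.strip()]
    phrases = [p.strip() for s in sentences for p in s.split(",") if p.strip()]
    words = [w for p in phrases for w in p.split()]
    return {"word": words, "phrase": phrases, "sentence": sentences}

def sampled_alignment_loss(codebook, text_emb, num_samples=4, rng=None):
    """Hypothetical sampling-based alignment: draw a few codebook entries
    and text embeddings, then penalize low cosine similarity between the
    sampled pairs (stand-in for the paper's alignment strategy)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    k = min(num_samples, len(codebook), len(text_emb))
    c = codebook[rng.choice(len(codebook), k, replace=False)]
    t = text_emb[rng.choice(len(text_emb), k, replace=False)]
    c = c / np.linalg.norm(c, axis=1, keepdims=True)   # unit-normalize codes
    t = t / np.linalg.norm(t, axis=1, keepdims=True)   # unit-normalize text
    return float(1.0 - np.mean(np.sum(c * t, axis=1)))  # 1 - mean cosine sim
```

In the actual framework, each granularity would be fed through the hierarchical encoder before alignment; here the split and the loss are shown independently for clarity.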