A Visual Leap in CLIP Compositionality Reasoning through Generation of Counterfactual Sets

📅 2025-07-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
Vision-language models (VLMs) exhibit limited compositional reasoning capability due to the scarcity of high-quality, structured image-text pairs. Method: We propose a counterfactual data augmentation framework that leverages large language model–guided block-wise diffusion to automatically generate controllable, high-fidelity, and diverse image-text counterfactuals without human annotation. To enforce fine-grained alignment, we model the visual input as a "puzzle" structure adhering to compositional rules, enabling precise correspondence between image patches and textual descriptions. We further introduce a contrastive loss distinguishing inter- and intra-set samples to strengthen compositional semantic alignment. Contribution/Results: Our method achieves state-of-the-art performance on multiple compositional generalization benchmarks. Notably, it attains significant gains in reasoning accuracy while using substantially less training data, empirically validating the critical role of counterfactual data in enhancing VLMs' compositional generalization capacity.

📝 Abstract
Vision-language models (VLMs) often struggle with compositional reasoning due to insufficient high-quality image-text data. To tackle this challenge, we propose a novel block-based diffusion approach that automatically generates counterfactual datasets without manual annotation. Our method utilizes large language models to identify entities and their spatial relationships. It then independently generates image blocks as "puzzle pieces" coherently arranged according to specified compositional rules. This process creates diverse, high-fidelity counterfactual image-text pairs with precisely controlled variations. In addition, we introduce a specialized loss function that differentiates inter-set from intra-set samples, enhancing training efficiency and reducing the need for negative samples. Experiments demonstrate that fine-tuning VLMs with our counterfactual datasets significantly improves visual reasoning performance. Our approach achieves state-of-the-art results across multiple benchmarks while using substantially less training data than existing methods.
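As a toy illustration of the "puzzle piece" arrangement step described in the abstract (not the authors' code), the sketch below shows how entities and a spatial relation, as an LLM might extract them, could be mapped to grid cells so that independently generated image blocks can be pasted into a coherent canvas. Swapping the relation yields the counterfactual layout. The function name and the fixed 2×2 grid are assumptions for illustration.

```python
def layout_blocks(entities, relation):
    """Hypothetical sketch: assign two entities to (row, col) cells of a
    2x2 canvas according to a spatial relation. A counterfactual image
    is obtained by flipping the relation (e.g. "left of" -> "right of")
    while keeping the per-entity blocks unchanged."""
    a, b = entities
    placements = {
        "left of":  {a: (0, 0), b: (0, 1)},
        "right of": {a: (0, 1), b: (0, 0)},
        "above":    {a: (0, 0), b: (1, 0)},
        "below":    {a: (1, 0), b: (0, 0)},
    }
    return placements[relation]
```

Because each block is generated independently, "a cat left of a dog" and its counterfactual "a cat right of a dog" differ only in layout, giving precisely controlled image-text variations.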
Problem

Research questions and friction points this paper is trying to address.

Improve CLIP model compositional reasoning with counterfactual data
Generate high-quality image-text pairs without manual annotation
Enhance training efficiency with specialized loss function
Innovation

Methods, ideas, or system contributions that make the work stand out.

Block-based diffusion for counterfactual dataset generation
LLM-guided entity and spatial relationship identification
Specialized loss function for efficient contrastive learning
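The paper does not give the exact form of its inter-/intra-set loss; the following is a minimal NumPy sketch of one plausible instantiation, assuming an InfoNCE-style objective in which negatives from the same counterfactual set (hard negatives differing in one compositional detail) are up-weighted relative to negatives from other sets. The function name, the weighting scheme, and the temperature are assumptions, not the authors' implementation.

```python
import numpy as np

def set_contrastive_loss(img_emb, txt_emb, set_ids, tau=0.07, intra_weight=2.0):
    """Hypothetical inter-/intra-set contrastive loss sketch.

    img_emb, txt_emb: (N, D) matched image/text embeddings.
    set_ids: (N,) counterfactual-set id per sample; samples sharing an id
    are intra-set hard negatives and get up-weighted in the denominator.
    """
    # L2-normalize so dot products are cosine similarities
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / tau                       # (N, N) similarities
    same_set = set_ids[:, None] == set_ids[None, :]  # intra-set mask
    weights = np.where(same_set, intra_weight, 1.0)
    np.fill_diagonal(weights, 1.0)                   # positives keep weight 1
    # weighted softmax cross-entropy with the diagonal as positives
    exp = weights * np.exp(logits - logits.max(axis=1, keepdims=True))
    loss = -np.log(np.diag(exp) / exp.sum(axis=1))
    return loss.mean()
```

Up-weighting intra-set terms focuses the gradient on counterfactuals that differ in a single attribute or relation, which is one way such a loss could reduce the number of random negatives needed per batch.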
Zexi Jia
WeChat AI, Tencent Inc, China
Chuanwei Huang
Institute for Artificial Intelligence, Peking University
Hongyan Fei
Peking University
computer vision, biometrics
Yeshuang Zhu
WeChat - Basic Architecture Dept., Tencent Inc.
natural language processing, image/video generation, human-computer interaction
Zhiqiang Yuan
Fudan University
Ying Deng
WeChat AI, Tencent Inc, China
Jiapei Zhang
WeChat AI, Tencent Inc, China
Jinchao Zhang
WeChat AI - Pattern Recognition Center
Deep Learning, Natural Language Processing, Machine Translation, Dialogue System
Jie Zhou
WeChat AI, Tencent Inc, China