🤖 AI Summary
Vision-language models (VLMs) exhibit limited compositional reasoning capability, largely due to the scarcity of high-quality, structured image-text pairs.
Method: We propose a counterfactual data augmentation framework that leverages large language model–guided block-wise diffusion to automatically generate controllable, high-fidelity, and diverse image-text counterfactuals without human annotation. To enforce fine-grained alignment, we model the visual input as a "puzzle" of blocks arranged according to compositional rules, enabling precise correspondence between image patches and textual descriptions. We further introduce a contrastive loss that distinguishes inter-set from intra-set samples to strengthen compositional semantic alignment.
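The "puzzle" idea can be illustrated with a minimal sketch. Everything here is an assumption for illustration: the function names, the fixed grid, and the block size are hypothetical, and the blocks stand in for outputs of the block-wise diffusion model; the layout dictionary plays the role of the LLM-extracted spatial relationships.

```python
# Hypothetical sketch of the "puzzle piece" composition step.
# `compose_puzzle`, BLOCK, and the grid layout are assumptions for
# illustration; real blocks would come from a diffusion model guided
# by an LLM-derived entity/relation spec.
import numpy as np

BLOCK = 32  # assumed per-block resolution in pixels

def compose_puzzle(blocks, layout, grid=(2, 2)):
    """Paste independently generated blocks into one canvas.

    blocks: {entity: (BLOCK, BLOCK, 3) uint8 array}
    layout: {entity: (row, col)} grid cell per entity
    """
    canvas = np.zeros((grid[0] * BLOCK, grid[1] * BLOCK, 3), dtype=np.uint8)
    for entity, (r, c) in layout.items():
        canvas[r*BLOCK:(r+1)*BLOCK, c*BLOCK:(c+1)*BLOCK] = blocks[entity]
    return canvas
```

Under this reading, a counterfactual pair falls out of swapping two entities' cells: the image for "the dog is left of the cat" and the one for "the cat is left of the dog" differ only in the layout dictionary, giving precisely controlled variation.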
Contribution/Results: Our method achieves state-of-the-art performance on multiple compositional generalization benchmarks. Notably, it attains significant gains in reasoning accuracy while using substantially less training data, empirically validating the critical role of counterfactual data in enhancing VLMs' compositional generalization capacity.
📝 Abstract
Vision-language models (VLMs) often struggle with compositional reasoning due to insufficient high-quality image-text data. To tackle this challenge, we propose a novel block-based diffusion approach that automatically generates counterfactual datasets without manual annotation. Our method utilizes large language models to identify entities and their spatial relationships. It then independently generates image blocks as "puzzle pieces" and arranges them coherently according to the specified compositional rules. This process creates diverse, high-fidelity counterfactual image-text pairs with precisely controlled variations. In addition, we introduce a specialized loss function that differentiates inter-set from intra-set samples, enhancing training efficiency and reducing the need for negative samples. Experiments demonstrate that fine-tuning VLMs with our counterfactual datasets significantly improves visual reasoning performance. Our approach achieves state-of-the-art results across multiple benchmarks while using substantially less training data than existing methods.
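One plausible form of the inter-/intra-set loss is an InfoNCE-style objective in which negatives from the same counterfactual set (near-duplicates differing only compositionally) are treated differently from ordinary inter-set negatives. The sketch below is an assumed formulation, not the paper's exact loss: `set_contrastive_loss`, the up-weighting of intra-set negatives via `intra_weight`, and the temperature value are all hypothetical choices.

```python
# Assumed sketch of an inter-/intra-set contrastive loss; the paper's
# exact formulation may differ. Each "set" groups an original image-text
# pair with its counterfactuals, so same-set entries are hard negatives.
import numpy as np

def set_contrastive_loss(img, txt, set_ids, tau=0.07, intra_weight=2.0):
    """InfoNCE-style loss over paired image/text embeddings.

    img, txt: (N, D) arrays of paired embeddings
    set_ids:  (N,) array marking each pair's counterfactual set
    Negatives from the same set are up-weighted by `intra_weight`
    (an assumption) to emphasize fine-grained compositional differences.
    """
    img = img / np.linalg.norm(img, axis=1, keepdims=True)
    txt = txt / np.linalg.norm(txt, axis=1, keepdims=True)
    logits = img @ txt.T / tau                      # (N, N) cosine similarities
    same_set = set_ids[:, None] == set_ids[None, :]
    weights = np.where(same_set, intra_weight, 1.0)
    np.fill_diagonal(weights, 1.0)                  # positives keep weight 1
    # weighted softmax cross-entropy with the matched caption as target
    w_exp = weights * np.exp(logits - logits.max(axis=1, keepdims=True))
    log_prob = np.log(np.diag(w_exp) / w_exp.sum(axis=1))
    return -log_prob.mean()
```

Because intra-set negatives enter the denominator with extra weight, the model is pushed hardest to separate a caption from its own counterfactuals, which is consistent with the stated goal of reducing reliance on large pools of random negatives.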