🤖 AI Summary
This study investigates whether CLIP generalizes compositionally to unseen object combinations in real-world data and, specifically, whether the pretraining frequencies of the constituent objects predict compositional retrieval performance.
Method: We introduce a novel-composition benchmark tailored for retrieval tasks, integrating frequency statistics with performance attribution modeling to analyze CLIP’s internal representations.
Contribution/Results: We provide the first empirical evidence that CLIP implicitly learns disentangled object representations amenable to linear recombination. Crucially, compositional retrieval accuracy is highly predictable (R² > 0.9) from the independent pretraining frequencies of the constituent objects. We further demonstrate that compositional generalization improves systematically with pretraining data scale, and propose a frequency-balanced data allocation strategy that, at fixed dataset size, significantly enhances retrieval accuracy and generalization efficiency for rare compositions.
📝 Abstract
We investigate the success conditions for compositional generalization of CLIP models on real-world data through performance prediction. Prior work shows that CLIP requires exponentially more pretraining data for linear performance gains on individual concepts. This sample-inefficient scaling could be mitigated if CLIP systematically understood new inputs as compositions of learned components, allowing rare observations to be mapped to common concepts. To explore CLIP's compositional generalization ability, we filter retrieval corpora for samples with object combinations not present in the pretraining corpus. We show that CLIP's performance on these samples can be accurately predicted from the pretraining frequencies of the individual objects. Our findings demonstrate that CLIP learns to disentangle objects observed in its pretraining data and can recompose them straightforwardly. Additionally, we are the first to show how this ability scales with pretraining data. For data curation in practice, our results suggest that balancing object occurrences improves generalization, which should benefit CLIP's efficiency and accuracy without scaling data volume.
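The performance-prediction setup described above can be illustrated with a minimal sketch: fit a regression from the per-object pretraining frequencies of a composition's constituents to retrieval accuracy, and measure R². Everything below is synthetic and hypothetical; the log-linear functional form, the frequency ranges, and the coefficients are illustrative assumptions, not the paper's exact model or data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200  # synthetic compositions (object A, object B)

# Hypothetical per-object pretraining frequencies, log-uniform over 10..1e6.
freq_a = 10 ** rng.uniform(1, 6, size=n)
freq_b = 10 ** rng.uniform(1, 6, size=n)

# Assumption: retrieval accuracy grows log-linearly with each constituent's
# independent frequency (plus noise). Design matrix with an intercept column.
X = np.column_stack([np.log10(freq_a), np.log10(freq_b), np.ones(n)])
true_w = np.array([0.08, 0.06, 0.10])  # illustrative coefficients
acc = X @ true_w + rng.normal(0.0, 0.01, size=n)

# Ordinary least squares: predict accuracy from frequencies alone.
w, *_ = np.linalg.lstsq(X, acc, rcond=None)
pred = X @ w

# Coefficient of determination of the frequency-based predictor.
r2 = 1.0 - np.sum((acc - pred) ** 2) / np.sum((acc - acc.mean()) ** 2)
print(f"R^2 = {r2:.3f}")
```

Under these assumptions the fit recovers the planted log-linear relationship with R² well above 0.9, mirroring the kind of predictability the summary reports; on real data, the frequencies would come from pretraining-corpus statistics and the targets from measured retrieval accuracy.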