🤖 AI Summary
In language-assisted image clustering (LaIC), selecting semantically matching positive nouns from unlabeled, open-domain text corpora remains a fundamental challenge.
Method: This paper proposes GradNorm, the first theoretically grounded, gradient-driven noun filtering framework for LaIC. Built upon CLIP, GradNorm quantifies semantic relevance by measuring the norm of the gradient that a cross-entropy loss backpropagates onto each noun embedding. We rigorously prove its ability to separate positive from negative nouns under mild assumptions and show that existing filtering methods are special cases of GradNorm. Crucially, GradNorm requires no additional training or fine-tuning, ensuring both interpretability and computational efficiency.
Results: Evaluated across multiple benchmark datasets, GradNorm significantly improves clustering accuracy, achieving state-of-the-art performance. Empirical results validate its theoretical guarantees and strong cross-domain generalization capability.
📝 Abstract
This paper investigates the recently emerged problem of Language-assisted Image Clustering (LaIC), where textual semantics are leveraged to improve the discriminability of visual representations and thereby facilitate image clustering. Because true class names are unavailable, one of the core challenges of LaIC lies in how to filter positive nouns, i.e., those semantically close to the images of interest, from unlabeled, open-domain corpora. Existing filtering strategies are predominantly based on the off-the-shelf feature space learned by CLIP; however, despite being intuitive, they lack a rigorous theoretical foundation. To fill this gap, we propose a novel gradient-based framework, termed GradNorm, which is theoretically guaranteed and shows strong empirical performance. Specifically, we measure the positiveness of each noun by the magnitude of the gradient backpropagated from the cross-entropy between the predicted target distribution and the softmax output. Theoretically, we derive a rigorous error bound quantifying the separability of positive nouns under GradNorm, and we prove that it naturally subsumes existing filtering strategies as special cases. Empirically, extensive experiments show that GradNorm achieves state-of-the-art clustering performance on various benchmarks.
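The gradient-norm scoring idea described above can be sketched in a few lines. The sketch below is a minimal illustration only: it assumes CLIP-style cosine-similarity logits between one image feature and the candidate noun embeddings, a uniform target distribution, and an arbitrary temperature value. The paper's actual target distribution, loss, and aggregation over images may differ; for unit-norm features the cross-entropy gradient with respect to each noun embedding has a closed form, so no autograd framework is needed here.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def gradnorm_scores(image_feat, noun_embs, temperature=0.01):
    """Score candidate nouns for one image by the norm of the
    cross-entropy gradient on their embeddings (illustrative sketch).

    temperature and the uniform target below are assumed values,
    not the paper's exact formulation.
    """
    # CLIP-style logits: cosine similarity between the image and each noun.
    img = image_feat / np.linalg.norm(image_feat)
    nouns = noun_embs / np.linalg.norm(noun_embs, axis=1, keepdims=True)
    logits = nouns @ img / temperature
    p = softmax(logits)
    # With target y and logit s_j = img . t_j / T, the cross-entropy gradient
    # w.r.t. noun embedding t_j is (p_j - y_j) * img / T, whose norm is
    # |p_j - y_j| / T because ||img|| = 1.
    y = np.full_like(p, 1.0 / len(p))
    return np.abs(p - y) / temperature

# Hypothetical usage: nouns with the largest scores are kept as positives.
rng = np.random.default_rng(0)
img = rng.normal(size=64)
pos = img + 0.1 * rng.normal(size=64)        # semantically close noun
negs = rng.normal(size=(2, 64))              # unrelated nouns
scores = gradnorm_scores(img, np.vstack([negs[0], pos, negs[1]]))
print(scores.argmax())                       # index of the positive noun
```

A matching noun concentrates the softmax mass on itself, pushing its logit gradient `|p_j - y_j|` toward `1 - 1/n`, while mismatched nouns stay near `1/n`; ranking by this norm therefore separates positives from negatives in this toy setting.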