🤖 AI Summary
This work addresses the open-set challenge in generalized category discovery (GCD), where only a subset of the known classes is labeled and unknown categories must be identified. To tackle this problem, we propose the SSR²-GCD framework, which, for the first time, introduces semi-supervised rate reduction into GCD. By optimizing multimodal representation learning, our method strengthens intra-modal alignment to construct structured feature distributions and exploits the prompt-candidate mechanism of vision-language models (VLMs) to improve cross-modal knowledge transfer. Extensive experiments on both generic and fine-grained benchmark datasets show that SSR²-GCD significantly outperforms existing approaches, achieving state-of-the-art performance.
📝 Abstract
Generalized Category Discovery (GCD) aims to identify both known and unknown categories when only partial labels are available for the known categories, posing a challenging open-set recognition problem. State-of-the-art approaches to the GCD task are usually built on multi-modality representation learning, which depends heavily on inter-modality alignment. However, few of them enforce a proper intra-modality alignment to produce a desirable underlying structure in the representation distributions. In this paper, we propose a novel and effective multi-modal representation learning framework for GCD via Semi-Supervised Rate Reduction, called SSR$^2$-GCD, which learns cross-modality representations with the desired structural properties by properly aligning intra-modality relationships. Moreover, to boost knowledge transfer, we integrate prompt candidates that leverage the inter-modal alignment offered by Vision-Language Models. Extensive experiments on generic and fine-grained benchmark datasets demonstrate the superior performance of our approach.