ViTNF: Leveraging Neural Fields to Boost Vision Transformers in Generalized Category Discovery

📅 2025-06-03
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
In generalized category discovery (GCD), Vision Transformer (ViT) classification heads incur high training costs, while the feature extractor's representational potential remains underexploited. Method: We propose a lightweight classifier based on dual-coupled static neural fields (SNFs), enabling the first explicit decoupling of feature extraction from classification decision-making, and introduce a scale-adaptive lateral-interaction algorithm that incorporates neural field modeling into the GCD meta-test phase. Our approach follows a source-task pretraining + plug-and-play meta-test paradigm, fully compatible with standard ViT backbones. Results: On four major benchmarks, including CIFAR-100, the method achieves +19% accuracy on novel classes and +16% on all classes, significantly surpassing the current state of the art. Core contributions: (1) the first SNF-based classifier architecture for GCD; (2) the first neural field modeling paradigm tailored to GCD meta-testing; and (3) empirical validation of effective ViT feature decoupling and reuse.

📝 Abstract
Generalized category discovery (GCD) is a highly popular task in open-world recognition, aiming to identify unknown-class samples using known-class data. By leveraging pre-training, meta-training, and fine-tuning, ViT achieves excellent few-shot learning capabilities. Its MLP head is a feedforward network trained synchronously with the entire network, which increases training cost and difficulty without fully exploiting the power of the feature extractor. This paper proposes a new architecture by replacing the MLP head with a neural field-based one. We first present a new static neural field function to describe the activity distribution of the neural field and then use two static neural field functions to build an efficient few-shot classifier. This neural field-based (NF) classifier consists of two coupled static neural fields. It stores the feature information of support samples in its elementary field, the known categories in its high-level field, and the category information of support samples in its cross-field connections. We replace the MLP head with the proposed NF classifier, yielding a novel architecture, ViTNF, and simplify the three-stage training mode by pre-training the feature extractor on source tasks and training the NF classifier with support samples separately during meta-testing, significantly reducing ViT's demand for training samples and the difficulty of model training. To enhance the model's capability in identifying new categories, we provide an effective algorithm for determining the lateral interaction scale of the elementary field. Experimental results demonstrate that our model surpasses existing state-of-the-art methods on CIFAR-100, ImageNet-100, CUB-200, and Stanford Cars, achieving dramatic accuracy improvements of 19% and 16% on new and all classes, respectively, indicating a notable advantage in GCD.
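The dual-field design described above (an elementary field storing support features, a high-level field storing known classes, and cross-field connections storing support labels) can be sketched roughly as follows. This is a minimal illustrative reading, not the paper's actual field equations: the Gaussian lateral-interaction kernel, the class-wise pooling rule, and the function name `snf_classify` are all assumptions.

```python
import numpy as np

def snf_classify(query, support_feats, support_labels, sigma=1.0):
    """Hypothetical sketch of a dual static-neural-field classifier.

    Elementary field: one unit per support sample; a query evokes unit
    activity through a Gaussian lateral-interaction kernel of scale sigma
    (the scale the paper's algorithm would tune).
    High-level field: one unit per known class.
    Cross-field connections: the support labels, mapping elementary
    units to class units.
    """
    # Elementary-field activity: Gaussian response of each support unit
    # to the query feature (assumed kernel, not the paper's).
    d2 = np.sum((support_feats - query) ** 2, axis=1)
    u = np.exp(-d2 / (2.0 * sigma ** 2))

    # Cross-field connection matrix: classes x support units, binary.
    classes = np.unique(support_labels)
    W = (support_labels[None, :] == classes[:, None]).astype(float)

    # High-level field activity: class-wise mean of elementary activity.
    v = (W @ u) / W.sum(axis=1)

    # Predicted category = most active high-level unit.
    return classes[np.argmax(v)]
```

Note that, consistent with the abstract's claim, "training" this classifier only requires storing the support features and labels; no gradient updates to the ViT backbone are involved.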
Problem

Research questions and friction points this paper is trying to address.

Replace MLP head with neural field for better few-shot learning
Simplify training by separating feature extractor and classifier training
Enhance new category identification with optimized neural field interaction
Innovation

Methods, ideas, or system contributions that make the work stand out.

Replaces MLP head with neural field classifier
Uses static neural fields for few-shot learning
Simplifies training by separating feature extractor
Jiayi Su
Northeastern University
Dequan Jin
School of Mathematics and Information Science, Guangxi University, Nanning, China 530004