🤖 AI Summary
Few-Shot Open-Set Recognition (FSOSR) aims to jointly discriminate known (closed-set) and unknown (open-set) classes, yet existing transfer learning paradigms struggle to generalize to open-world settings. This paper proposes a two-stage framework: first, open-set-aware meta-learning constructs a robust, transferable metric space; second, lightweight, open-set-agnostic fine-tuning is performed within this space. The key contribution is the first successful adaptation of transfer learning to FSOSR, achieved via an open-set simulation strategy that combines data perturbation and pseudo-sample generation—thereby decoupling meta-training from downstream adaptation. Our method achieves state-of-the-art performance on miniImageNet and tieredImageNet, with only a 1.5% increase in training overhead.
📝 Abstract
Few-Shot Open-Set Recognition (FSOSR) targets a critical real-world challenge: categorizing inputs into known categories, termed closed-set classes, while identifying open-set inputs that fall outside these classes. Although transfer learning, where a model is tuned to a given few-shot task, has become a prominent paradigm in the closed-world setting, we observe that it fails to generalize to the open-world setting. To address this challenge, we propose a two-stage method consisting of open-set-aware meta-learning followed by open-set-free transfer learning. In the open-set-aware meta-learning stage, a model is trained to establish a metric space that serves as a beneficial starting point for the subsequent stage. During the open-set-free transfer learning stage, the model is further adapted to a specific target task through transfer learning. Additionally, we introduce a strategy to simulate open-set examples by modifying the training dataset or generating pseudo open-set examples. The proposed method achieves state-of-the-art performance on two widely recognized benchmarks, miniImageNet and tieredImageNet, with only a 1.5% increase in training effort. Our work demonstrates the effectiveness of transfer learning in FSOSR.
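The abstract does not spell out how pseudo open-set examples are generated. One common way to realize such a simulation, sketched below purely as an illustrative assumption (the function name, Beta-mixing scheme, and parameters are hypothetical, not the paper's actual method), is to convexly mix images drawn from two *different* closed-set classes so the result lies off any single class manifold:

```python
import numpy as np

def make_pseudo_open_set(images, labels, num_pseudo, alpha=0.5, seed=None):
    """Hypothetical sketch: build pseudo open-set examples by convexly
    mixing pairs of closed-set images from two different classes.

    images : array of shape (N, ...) -- closed-set training images
    labels : array of shape (N,)     -- their closed-set class labels
    num_pseudo : number of pseudo open-set examples to generate
    alpha : Beta-distribution parameter controlling the mixing ratio
    """
    rng = np.random.default_rng(seed)
    pseudo = []
    for _ in range(num_pseudo):
        # Draw two examples guaranteed to come from different classes,
        # so the mixture does not resemble any single known class.
        i = rng.integers(len(images))
        j = rng.integers(len(images))
        while labels[j] == labels[i]:
            j = rng.integers(len(images))
        lam = rng.beta(alpha, alpha)  # random mixing coefficient in (0, 1)
        pseudo.append(lam * images[i] + (1 - lam) * images[j])
    return np.stack(pseudo)
```

During meta-training, the generated batch could be appended to each episode as an extra "unknown" category, letting the open-set-aware stage shape the metric space without requiring real open-set data.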