🤖 AI Summary
Few-shot class-incremental learning (FSCIL) suffers from the dual challenges of catastrophic forgetting of old classes and overfitting to novel classes. Method: This paper proposes a brain-inspired, analogy-driven prototypical learning framework that integrates statistical analogy into hybrid prototype modeling. It introduces a statistical-analogy-based calibration strategy for novel-class prototype distributions and a prototype covariance adaptation mechanism to mitigate knowledge forgetting. Furthermore, it employs Mahalanobis-distance metric learning and a soft-voting ensemble to enhance discriminative robustness, with optimization built upon fine-tuned Vision Transformers. Contribution/Results: The method achieves state-of-the-art performance across multiple FSCIL benchmarks, significantly improving the balance between accuracy on novel and base classes. Experimental results validate the effectiveness and generalization advantage of analogy-driven prototypical modeling for few-shot continual learning.
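The calibration step described above can be sketched in code. This is a minimal illustration, not the paper's exact formulation: the function name `calibrate_prototype`, the top-k similarity selection, and the blending weight `alpha` are all assumptions chosen to show the general idea of borrowing base-class statistics for a novel class.

```python
import numpy as np

def calibrate_prototype(novel_feats, base_means, base_covs, top_k=2, alpha=0.5):
    """Illustrative statistical-analogy calibration (hypothetical form):
    blend a novel class's few-shot mean/covariance with those of its most
    similar base classes, weighted by cosine similarity."""
    mu_novel = novel_feats.mean(axis=0)
    # cosine similarity between the novel-class mean and each base-class mean
    sims = base_means @ mu_novel / (
        np.linalg.norm(base_means, axis=1) * np.linalg.norm(mu_novel) + 1e-8)
    idx = np.argsort(sims)[-top_k:]          # top-k most similar base classes
    w = sims[idx] / sims[idx].sum()          # normalized similarity weights
    # calibrated mean: convex blend of few-shot mean and analogous base means
    mu_cal = alpha * mu_novel + (1 - alpha) * (w @ base_means[idx])
    # calibrated covariance: blend of few-shot covariance and base covariances
    cov_cal = (alpha * np.cov(novel_feats, rowvar=False)
               + (1 - alpha) * np.tensordot(w, base_covs[idx], axes=1))
    return mu_cal, cov_cal
```

The design intuition is that a handful of novel-class samples give an unreliable covariance estimate, so statistics are borrowed from base classes the novel class resembles.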
📝 Abstract
Few-shot class-incremental learning (FSCIL) poses significant challenges for artificial neural networks due to the need to learn efficiently from limited data while retaining knowledge of previously learned tasks. Inspired by the brain's mechanisms for categorization and analogical learning, we propose a novel approach called Brain-inspired Analogical Mixture Prototypes (BAMP). BAMP has three components: mixed prototypical feature learning, statistical analogy, and soft voting. Starting from a pre-trained Vision Transformer (ViT), mixed prototypical feature learning represents each class using a mixture of prototypes and fine-tunes these representations during the base session. The statistical analogy calibrates the means and covariance matrices of prototypes for new classes according to their similarity to the base classes, and computes classification scores with the Mahalanobis distance. Soft voting combines the merits of both statistical analogy and an off-the-shelf FSCIL method. Our experiments on benchmark datasets demonstrate that BAMP outperforms the state of the art in both the traditional big-start FSCIL setting and the challenging small-start FSCIL setting. The study suggests that brain-inspired analogical mixture prototypes can alleviate catastrophic forgetting and overfitting problems in FSCIL.
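The scoring and soft-voting steps in the abstract can be sketched as follows. This is a minimal sketch under stated assumptions: the helper names `mahalanobis_scores` and `soft_vote`, the covariance shrinkage term `eps`, and the equal-weight averaging of softmax probabilities are illustrative choices, not the paper's exact implementation.

```python
import numpy as np

def mahalanobis_scores(x, means, covs, eps=1e-3):
    """Negative squared Mahalanobis distance from feature x to each class
    prototype, usable as classification logits. The shrinkage `eps` added
    to the diagonal is an assumption for numerical stability."""
    scores = []
    for mu, cov in zip(means, covs):
        prec = np.linalg.inv(cov + eps * np.eye(cov.shape[0]))
        d = x - mu
        scores.append(-d @ prec @ d)
    return np.array(scores)

def soft_vote(scores_a, scores_b, weight=0.5):
    """Soft voting: average the softmax probabilities of two classifiers
    (e.g. the statistical-analogy scorer and a second FSCIL method)."""
    def softmax(s):
        e = np.exp(s - s.max())
        return e / e.sum()
    return weight * softmax(scores_a) + (1 - weight) * softmax(scores_b)
```

Using the Mahalanobis distance rather than the Euclidean distance lets each class's calibrated covariance shape its decision region, which is the point of calibrating covariances in the first place.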