AI Summary
This work proposes the first entirely training-free continual learning framework for class-incremental semantic segmentation, addressing the high computational cost and catastrophic forgetting inherent in existing approaches. The method decouples the task into two stages: zero-shot region extraction leveraging the Segment Anything Model, followed by prototype-based semantic association. Within a fixed feature space, it models semantic concepts through multi-prototype selective intra-class clustering. By eliminating the need for model updates, the framework completely avoids catastrophic forgetting and exhibits both forward and backward knowledge transfer. Remarkably, it outperforms most training-based methods on standard CISS benchmarks, demonstrating particularly stable performance over long task sequences.
Abstract
Continual learning remains constrained by repeated retraining, high computational cost, and the persistent challenge of forgetting. These factors limit the applicability of continual learning in real-world settings, since iterative model updates demand substantial compute and inherently exacerbate forgetting. We present SAILS -- Segment Anything with Incrementally Learned Semantics, a training-free framework for Class-Incremental Semantic Segmentation (CISS) that sidesteps these challenges entirely. SAILS leverages foundation models to decouple CISS into two stages: zero-shot region extraction using the Segment Anything Model (SAM), followed by semantic association through prototypes in a fixed feature space. SAILS incorporates selective intra-class clustering, yielding multiple prototypes per class to better model intra-class variability. Our results demonstrate that, despite requiring no incremental training, SAILS typically surpasses existing training-based approaches on standard CISS benchmarks, particularly on long and challenging task sequences where forgetting is most severe. By avoiding parameter updates, SAILS eliminates forgetting entirely and maintains consistent, task-invariant performance. Furthermore, SAILS exhibits positive backward transfer: introducing new classes can improve performance on previously seen classes.
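The second stage described above, semantic association through multiple prototypes per class in a fixed feature space, can be sketched as nearest-prototype matching. The sketch below is purely illustrative: the function and variable names are assumptions, the similarity measure (cosine over max-pooled per-class prototypes) is one plausible choice, and the paper's exact procedure may differ.

```python
import numpy as np

def assign_labels(region_feats, prototypes):
    """Illustrative nearest-prototype semantic association.

    region_feats: (R, D) array of embeddings for SAM-extracted regions,
                  taken from a frozen (fixed) feature encoder.
    prototypes:   dict mapping class id -> (K_c, D) array; each class keeps
                  multiple prototypes to model intra-class variability.
    Returns a list of R predicted class ids.
    """
    labels = []
    for f in region_feats:
        f = f / np.linalg.norm(f)  # unit-normalize the region embedding
        best_cls, best_sim = None, -np.inf
        for cls, protos in prototypes.items():
            # Unit-normalize prototypes, then score the class by its
            # best-matching prototype (max cosine similarity).
            p = protos / np.linalg.norm(protos, axis=1, keepdims=True)
            sim = float(np.max(p @ f))
            if sim > best_sim:
                best_cls, best_sim = cls, sim
        labels.append(best_cls)
    return labels

# Adding a new class only adds entries to `prototypes`; no parameters are
# updated, so predictions for earlier classes cannot degrade (no forgetting).
```

Under this framing, continual learning reduces to growing the prototype dictionary, which is why the framework is training-free and immune to catastrophic forgetting by construction.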