Advancing Analytic Class-Incremental Learning through Vision-Language Calibration

📅 2026-02-14
🤖 AI Summary
This work addresses a core challenge in class-incremental learning with pretrained models: representational rigidity in the frozen backbone makes analytic learning vulnerable to cumulative errors and feature incompatibility, so existing methods struggle to balance adaptation efficiency with long-term stability. The paper first conducts a systematic study that uncovers these failure mechanisms of analytic class-incremental learning, then proposes VILA, a dual-branch framework with a two-level calibration strategy: at the feature level, it fuses task-adaptive features with frozen semantic anchors; at the decision level, it leverages cross-modal semantic priors to correct prediction bias. This design mitigates representational rigidity and achieves state-of-the-art performance across eight benchmarks, combining high accuracy with efficiency in fine-grained and long-sequence scenarios.
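The two-level calibration described above can be sketched in a few lines of numpy. This is a minimal illustration under assumptions, not the paper's actual formulation: the function names, the convex-blend coefficients `alpha` and `beta`, and the use of cosine similarity to frozen class-name text embeddings as the cross-modal prior are all hypothetical choices for the sketch.

```python
import numpy as np

def l2norm(x, axis=-1, eps=1e-8):
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def two_level_calibration(feat_task, feat_anchor, W, text_proto,
                          alpha=0.5, beta=0.3):
    """Sketch: feature-level fusion + decision-level cross-modal correction.

    feat_task   : (n, d) task-adapted (plastic) features
    feat_anchor : (n, d) features from the frozen pretrained (anchor) encoder
    W           : (d, c) classifier weights
    text_proto  : (c, d) frozen text embeddings of class names (semantic prior)
    """
    # Feature level: blend plastic features with the frozen semantic anchor
    fused = l2norm(alpha * l2norm(feat_task) + (1 - alpha) * l2norm(feat_anchor))
    # Decision level: mix classifier logits with a cross-modal similarity prior
    logits = fused @ W
    prior = fused @ l2norm(text_proto).T  # cosine similarity to class text
    return (1 - beta) * logits + beta * prior
```

With `beta=0` the decision-level correction is disabled and the output reduces to the fused features passed through the classifier, which makes the two calibration levels easy to ablate independently.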

📝 Abstract
Class-incremental learning (CIL) with pre-trained models (PTMs) faces a critical trade-off between efficient adaptation and long-term stability. While analytic learning enables rapid, recursive closed-form updates, its efficacy is often compromised by accumulated errors and feature incompatibility. In this paper, we first conduct a systematic study to dissect the failure modes of PTM-based analytic CIL, identifying representation rigidity as the primary bottleneck. Motivated by these insights, we propose **VILA**, a novel dual-branch framework that advances analytic CIL via a two-level vision-language calibration strategy. Specifically, we coherently fuse plastic, task-adapted features with a frozen, universal semantic anchor at the feature level through geometric calibration, and leverage cross-modal priors at the decision level to rectify prediction bias. This confluence maintains the extreme efficiency of analytic learning while overcoming its inherent brittleness. Extensive experiments across eight benchmarks demonstrate that VILA consistently yields superior performance, particularly in fine-grained and long-sequence scenarios. Our framework harmonizes high-fidelity prediction with the simplicity of analytic learning. Our code is available at https://github.com/byzhaoAI/VILA.
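The "rapid, recursive closed-form updates" the abstract refers to are, in analytic CIL methods generally, recursive regularized least-squares updates of the classifier head. The following is an illustrative sketch of that standard recursion (function and variable names are our own, not from the paper's code): each new task's features update the weights and the cached inverse via the Woodbury identity, so old-task data never needs to be revisited.

```python
import numpy as np

def analytic_update(W, R, X, Y):
    """One recursive closed-form (regularized least-squares) update.

    W : (d, c) current classifier weights
    R : (d, d) current inverse of the regularized feature autocorrelation
    X : (n, d) features of the new task's samples
    Y : (n, c) one-hot labels (columns for new classes appended beforehand)

    Returns updated (W, R) without revisiting any old-task data.
    """
    n = X.shape[0]
    # Woodbury identity: refresh the d x d inverse via a small n x n solve
    K = np.linalg.inv(np.eye(n) + X @ R @ X.T)
    R_new = R - R @ X.T @ K @ X @ R
    # Closed-form correction toward the joint ridge-regression solution
    W_new = W + R_new @ X.T @ (Y - X @ W)
    return W_new, R_new
```

Starting from `W = 0` and `R = I / gamma`, sequential updates reproduce exactly the ridge solution over all data seen so far, which is why analytic CIL is forgetting-free at the classifier level; the paper's diagnosis is that errors instead accumulate upstream when the backbone's representations are rigid.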
Problem

Research questions and friction points this paper is trying to address.

Class-Incremental Learning
Pre-trained Models
Analytic Learning
Representation Rigidity
Feature Incompatibility
Innovation

Methods, ideas, or system contributions that make the work stand out.

analytic class-incremental learning
vision-language calibration
representation rigidity
dual-branch framework
cross-modal priors
👥 Authors
Binyu Zhao · School of Computer Science and Technology, Harbin Institute of Technology, China
Wei Zhang · School of Computer Science and Technology, Harbin Institute of Technology, China
Xingrui Yu · Scientist, CFAR, A*STAR · Machine Learning, Robust Imitation Learning, Trustworthy AI
Zhaonian Zou · Harbin Institute of Technology, China · Databases, Data Mining
Ivor Tsang · Centre for Frontier AI Research (CFAR), Agency for Science, Technology and Research (A*STAR), Singapore