🤖 AI Summary
Active learning (AL) becomes unreliable under label noise and distributional shift, tending to select erroneous or redundant samples. To address this, the paper proposes a robust sample selection method grounded in the geometric structure of neural collapse. The core innovation is a dual-signal mechanism: (i) a class-mean alignment perturbation score, measuring structural stability; and (ii) a cross-checkpoint feature fluctuation score, quantifying representation consistency. Requiring neither additional annotations nor model modifications, the method inherently suppresses noise interference and avoids redundant sampling. Extensive experiments on ImageNet-100 and CIFAR-100 demonstrate that the approach significantly outperforms state-of-the-art AL methods under identical or lower annotation budgets. Moreover, it exhibits superior robustness under synthetic label noise and out-of-distribution scenarios, validating its effectiveness in challenging real-world settings.
📝 Abstract
Active Learning (AL) promises to reduce annotation cost by prioritizing informative samples, yet its reliability is undermined when labels are noisy or when the data distribution shifts. In practice, annotators make mistakes, rare categories are ambiguous, and conventional AL heuristics (uncertainty, diversity) often amplify such errors by repeatedly selecting mislabeled or redundant samples. We propose Reliable Active Learning via Neural Collapse Geometry (NCAL-R), a framework that leverages the emergent geometric regularities of deep networks to counteract unreliable supervision. Our method introduces two complementary signals: (i) a Class-Mean Alignment Perturbation score, which quantifies how candidate samples structurally stabilize or distort inter-class geometry, and (ii) a Feature Fluctuation score, which captures temporal instability of representations across training checkpoints. By combining these signals, NCAL-R prioritizes samples that both preserve class separation and highlight ambiguous regions, mitigating the effect of noisy or redundant labels. Experiments on ImageNet-100 and CIFAR-100 show that NCAL-R consistently outperforms standard AL baselines, achieving higher accuracy with fewer labels, improved robustness under synthetic label noise, and stronger generalization to out-of-distribution data. These results suggest that incorporating geometric reliability criteria into acquisition decisions can make Active Learning less brittle to annotation errors and distribution shifts, a key step toward trustworthy deployment in real-world labeling pipelines. Our code is available at https://github.com/Vision-IIITD/NCAL.
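To make the dual-signal idea concrete, here is a minimal NumPy sketch of how such an acquisition score *could* be computed. This is an illustrative reconstruction from the abstract, not the authors' implementation: the function names, the cosine-similarity measure of inter-class geometry, the 0.5 mean-update rule, and the mixing weight `alpha` are all assumptions for exposition.

```python
import numpy as np

def class_means(features, labels, num_classes):
    """Mean feature vector per class (one row per class)."""
    return np.stack([features[labels == c].mean(axis=0) for c in range(num_classes)])

def alignment_perturbation(candidate, pred_class, means):
    """Illustrative perturbation score: how much folding `candidate` into its
    predicted class mean shifts the pairwise inter-class cosine similarities.
    (The 0.5 mixing of candidate and mean is an assumption, not the paper's rule.)"""
    def pairwise_cos(m):
        n = m / np.linalg.norm(m, axis=1, keepdims=True)
        return n @ n.T
    before = pairwise_cos(means)
    perturbed = means.copy()
    perturbed[pred_class] = 0.5 * (perturbed[pred_class] + candidate)
    return np.abs(pairwise_cos(perturbed) - before).sum()

def feature_fluctuation(feats_over_checkpoints):
    """Illustrative fluctuation score: mean displacement of one sample's
    feature vector between consecutive training checkpoints."""
    diffs = np.diff(feats_over_checkpoints, axis=0)
    return np.linalg.norm(diffs, axis=1).mean()

def acquisition_score(candidate, pred_class, means, feats_over_ckpts, alpha=0.5):
    """Combine the two signals; `alpha` is a hypothetical trade-off weight."""
    return (alpha * alignment_perturbation(candidate, pred_class, means)
            + (1 - alpha) * feature_fluctuation(feats_over_ckpts))
```

Under this sketch, a candidate lying exactly on its class mean perturbs the geometry not at all, while a candidate with a drifting representation across checkpoints is scored higher, matching the abstract's intuition that NCAL-R favors samples that are geometrically informative yet flags unstable, ambiguous regions.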