Reliable Active Learning from Unreliable Labels via Neural Collapse Geometry

📅 2025-10-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address active learning's (AL) reduced reliability and its tendency to select erroneous or redundant samples under label noise and distributional shift, this paper proposes a robust sample selection method grounded in the geometric structure of neural collapse. The core innovation is a dual-signal mechanism: (i) a class-mean alignment perturbation score, which measures structural stability; and (ii) a cross-checkpoint feature fluctuation score, which quantifies representation consistency. Requiring neither additional annotations nor model modifications, the method inherently suppresses noise interference and avoids redundant sampling. Extensive experiments on ImageNet-100 and CIFAR-100 demonstrate that the approach significantly outperforms state-of-the-art AL methods under identical or lower annotation budgets. Moreover, it exhibits superior robustness under synthetic label noise and in out-of-distribution scenarios, validating its effectiveness in challenging real-world settings.

📝 Abstract
Active Learning (AL) promises to reduce annotation cost by prioritizing informative samples, yet its reliability is undermined when labels are noisy or when the data distribution shifts. In practice, annotators make mistakes, rare categories are ambiguous, and conventional AL heuristics (uncertainty, diversity) often amplify such errors by repeatedly selecting mislabeled or redundant samples. We propose Reliable Active Learning via Neural Collapse Geometry (NCAL-R), a framework that leverages the emergent geometric regularities of deep networks to counteract unreliable supervision. Our method introduces two complementary signals: (i) a Class-Mean Alignment Perturbation score, which quantifies how candidate samples structurally stabilize or distort inter-class geometry, and (ii) a Feature Fluctuation score, which captures temporal instability of representations across training checkpoints. By combining these signals, NCAL-R prioritizes samples that both preserve class separation and highlight ambiguous regions, mitigating the effect of noisy or redundant labels. Experiments on ImageNet-100 and CIFAR-100 show that NCAL-R consistently outperforms standard AL baselines, achieving higher accuracy with fewer labels, improved robustness under synthetic label noise, and stronger generalization to out-of-distribution data. These results suggest that incorporating geometric reliability criteria into acquisition decisions can make Active Learning less brittle to annotation errors and distribution shifts, a key step toward trustworthy deployment in real-world labeling pipelines. Our code is available at https://github.com/Vision-IIITD/NCAL.
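The two signals above can be sketched in a few lines of numpy. This is a minimal illustration, not the paper's implementation: the exact perturbation metric (here, the change in pairwise cosine similarity between class means when a candidate is hypothetically added to its predicted class) and the fluctuation measure (here, mean per-dimension standard deviation of a sample's features across checkpoints) are assumptions, as is the linear combination in `ncal_score`.

```python
import numpy as np

def class_mean_alignment_perturbation(features, labels, candidate, cand_label):
    """How much does adding `candidate` to class `cand_label` distort
    the inter-class geometry of the (neural-collapse-style) class means?"""
    classes = np.unique(labels)
    means = np.stack([features[labels == c].mean(axis=0) for c in classes])

    def pairwise_cos(M):
        M = M / np.linalg.norm(M, axis=1, keepdims=True)
        return M @ M.T

    base = pairwise_cos(means)
    # Recompute the candidate's class mean as if the sample were added.
    perturbed = means.copy()
    idx = np.where(classes == cand_label)[0][0]
    members = features[labels == cand_label]
    perturbed[idx] = (members.sum(axis=0) + candidate) / (len(members) + 1)
    # Total change in pairwise class-mean alignment = structural perturbation.
    return np.abs(pairwise_cos(perturbed) - base).sum()

def feature_fluctuation(checkpoint_feats):
    """checkpoint_feats: (T, d) features of one sample across T checkpoints.
    Higher values mean the representation is still unstable."""
    return checkpoint_feats.std(axis=0).mean()

def ncal_score(align_pert, fluct, alpha=0.5):
    # Assumed combination rule: a simple convex mix of the two signals.
    return alpha * align_pert + (1 - alpha) * fluct
```

In an AL round, one would score each unlabeled candidate (using the model's predicted class as `cand_label`, since the true label is unknown before annotation) and select the top scorers for labeling.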
Problem

Research questions and friction points this paper is trying to address.

Addresses unreliable active learning with noisy labels and data shifts
Mitigates annotation errors amplified by conventional AL heuristics
Improves robustness to label noise and out-of-distribution generalization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leverages neural collapse geometry for reliable active learning
Introduces class-mean alignment perturbation scoring
Combines feature fluctuation with geometric stability signals
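A plausible way to combine the two signals into a batch acquisition step is to min-max normalize each score over the unlabeled pool and take the top-`budget` samples by a weighted sum. The normalization and the weight `alpha` are assumptions for illustration; the paper's actual combination rule may differ.

```python
import numpy as np

def select_batch(align_scores, fluct_scores, budget, alpha=0.5):
    """Pick `budget` indices with the highest combined acquisition score."""
    def minmax(x):
        x = np.asarray(x, dtype=float)
        span = x.max() - x.min()
        # Constant scores carry no ranking signal; map them to zeros.
        return (x - x.min()) / span if span > 0 else np.zeros_like(x)

    combined = alpha * minmax(align_scores) + (1 - alpha) * minmax(fluct_scores)
    # Descending sort via negation; keep the top-`budget` indices.
    return np.argsort(-combined)[:budget]
```

Normalizing both signals to [0, 1] before mixing prevents whichever score happens to have the larger raw scale from dominating the selection.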
Authors
Atharv Goel (IIIT Delhi)
Sharat Agarwal (IIIT Delhi)
Saket Anand (Associate Professor, Indraprastha Institute of Information Technology; Computer Vision, Machine Learning, Deep Learning)
Chetan Arora (IIT Delhi)