🤖 AI Summary
Deep neural networks often fail to generalize because they learn spurious correlations: statistical associations in the training data that are predictive during training but harmful at deployment. These correlations are especially hard to detect when the spurious signal is weaker than the core semantic signal, which renders conventional debiasing methods ineffective. This work finds that spurious correlations are driven primarily by a small subset of the samples containing spurious features. To address this, the authors propose a fully unsupervised data-pruning framework that requires no prior knowledge, domain-specific assumptions, or sample-level spurious labels. Leveraging gradient sensitivity analysis and influence estimation, the method automatically identifies and iteratively removes the critical bias-inducing samples. Evaluated on standard benchmarks including Waterbirds and CelebA, the approach achieves state-of-the-art debiasing performance and significantly improves out-of-distribution robustness.
📝 Abstract
Deep neural networks have been shown to learn and rely on spurious correlations present in the data on which they are trained. Reliance on such correlations can cause these networks to malfunction when deployed in the real world, where the correlations may no longer hold. Recent studies propose approaches that yield promising results toward overcoming the learning of, and reliance on, such correlations. These works, however, study settings where the spurious signal is significantly stronger than the core, invariant signal, making it easier to detect the presence of spurious features in individual training samples and allowing for further processing. In this paper, we identify new settings where the spurious signal is relatively weak, making any spurious information difficult to detect even though it continues to have catastrophic consequences. We also discover that spurious correlations are learned primarily due to only a handful of the samples containing the spurious feature, and we develop a novel data-pruning technique that identifies and prunes the small subsets of the training data containing these samples. Our technique requires no inferred domain knowledge, no information about the sample-wise presence or nature of spurious information, and no human intervention. Finally, we show that such data pruning attains state-of-the-art performance on previously studied settings where spurious information is identifiable.
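The summary above mentions gradient sensitivity and influence estimation as the machinery for finding bias-inducing samples. The paper's exact procedure is not spelled out here, so the following is only a minimal illustrative sketch of the general idea: train a model, score each training sample by a gradient-based influence proxy, prune the highest-scoring samples, and repeat. All names (`influence_scores`, `iterative_prune`, `prune_fraction`) and the choice of a logistic-regression probe with per-sample gradient norms as the influence proxy are assumptions for illustration, not the authors' method.

```python
# Illustrative sketch of iterative influence-based data pruning.
# NOT the paper's algorithm: the probe model, the gradient-norm influence
# proxy, and all names here are hypothetical choices for demonstration.
import numpy as np


def train_logreg(X, y, lr=0.1, epochs=200):
    """Fit a simple logistic-regression probe by full-batch gradient descent."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w


def influence_scores(X, y, w):
    """Proxy influence: per-sample gradient norm at the trained solution.

    Samples whose loss gradient is largest are the ones pulling the
    parameters hardest, so they are scored as most influential.
    """
    p = 1.0 / (1.0 + np.exp(-X @ w))
    per_sample_grad = (p - y)[:, None] * X  # gradient of the loss w.r.t. w, per sample
    return np.linalg.norm(per_sample_grad, axis=1)


def iterative_prune(X, y, rounds=3, prune_fraction=0.05):
    """Repeatedly retrain, score, and drop the highest-influence samples.

    Returns the indices of the samples that survive all pruning rounds.
    """
    keep = np.arange(len(y))
    for _ in range(rounds):
        w = train_logreg(X[keep], y[keep])
        scores = influence_scores(X[keep], y[keep], w)
        n_drop = max(1, int(prune_fraction * len(keep)))
        keep = keep[np.argsort(scores)[:-n_drop]]  # keep all but the top scorers
    return keep
```

In this sketch the pruning is fully unsupervised in the sense the abstract describes: no sample-wise spurious-feature labels are consulted, only the model's own gradients on the training data.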