🤖 AI Summary
This work addresses the degradation of model generalization caused by label noise in deep learning by establishing, for the first time, a theoretical connection between label noise and the flatness of the loss landscape. Building upon the Sharpness-Aware Minimization (SAM) framework, the authors propose a novel optimization strategy that introduces controllable, synthetic label noise perturbations during training to counteract the adverse effects of real label noise. This noise-compensation mechanism effectively steers optimization toward flatter minima that are more robust to label corruption. Extensive experiments across multiple benchmark datasets demonstrate that the proposed method significantly outperforms existing approaches, yielding substantial improvements in both generalization performance and robustness under noisy labeling conditions.
📝 Abstract
Learning from Noisy Labels (LNL) presents a fundamental challenge in deep learning, as real-world datasets often contain erroneous or corrupted annotations, \textit{e.g.}, data crawled from the Web. Current research focuses on sophisticated label correction mechanisms. In contrast, this paper adopts a novel perspective by establishing a theoretical analysis of the relationship between the flatness of the loss landscape and the presence of label noise. We theoretically demonstrate that carefully simulated label noise synergistically enhances both generalization performance and robustness to label noise. Consequently, we propose Noise-Compensated Sharpness-Aware Minimization (NCSAM), which leverages the perturbation mechanism of Sharpness-Aware Minimization (SAM) to remedy the damage caused by label noise. Our analysis reveals that the test accuracy exhibits behavior similar to that observed on a noise-free dataset. Extensive experimental results on multiple benchmark datasets demonstrate the consistent superiority of the proposed method over existing state-of-the-art approaches across diverse tasks.
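As context for the SAM perturbation that NCSAM builds on, the sketch below shows a plain SAM update step on a toy quadratic loss: ascend to the worst-case weights within an L2 ball of radius `rho`, then descend using the gradient taken there. This is only the standard SAM baseline, not the paper's noise-compensation mechanism; the function name `sam_step` and the toy loss are illustrative choices, not part of the authors' code.

```python
import numpy as np

def sam_step(w, grad_fn, lr=0.1, rho=0.05):
    """One Sharpness-Aware Minimization (SAM) step:
    1) move to the (first-order) worst-case point within an
       L2 ball of radius rho around the current weights,
    2) apply gradient descent using the gradient at that point.
    """
    g = grad_fn(w)
    # Ascent direction: normalized gradient scaled to the ball radius.
    eps = rho * g / (np.linalg.norm(g) + 1e-12)
    # Gradient evaluated at the perturbed (sharpness-probing) weights.
    g_sharp = grad_fn(w + eps)
    return w - lr * g_sharp

# Toy quadratic loss L(w) = 0.5 * ||w||^2, whose gradient is simply w.
w = np.array([2.0, -1.0])
for _ in range(100):
    w = sam_step(w, lambda v: v)
# The iterates settle in a small neighborhood of the minimum at 0.
```

NCSAM, as summarized above, additionally injects controllable synthetic label-noise perturbations during training so that this flatness-seeking update compensates for real annotation noise.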