TMLC-Net: Transferable Meta Label Correction for Noisy Label Learning

📅 2025-02-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
Noisy labels in real-world data severely hinder the robust deployment of deep learning models, and existing meta-learning approaches suffer from poor generalizability and heavy reliance on task-specific customization. To address this, the authors propose TMLC-Net, a transferable meta-learning label correction framework that establishes a universal, retraining-free label correction paradigm across datasets and model architectures. Its core innovations are: (1) a normalized noise-aware module that adaptively models sample-wise confidence; (2) an RNN-based temporal encoding module that captures dynamic training-evolution patterns; and (3) a subclass distribution decoding module that enables fine-grained modeling of noise structure. Extensive experiments under diverse noise types and intensities show that TMLC-Net significantly outperforms state-of-the-art methods. Crucially, it exhibits strong cross-dataset and cross-noise-type generalization, achieving substantial improvements in both classification accuracy and robustness.

📝 Abstract
The prevalence of noisy labels in real-world datasets poses a significant impediment to the effective deployment of deep learning models. While meta-learning strategies have emerged as a promising approach for addressing this challenge, existing methods often suffer from limited transferability and task-specific designs. This paper introduces TMLC-Net, a novel Transferable Meta-Learner for Correcting Noisy Labels, designed to overcome these limitations. TMLC-Net learns a general-purpose label correction strategy that can be readily applied across diverse datasets and model architectures without requiring extensive retraining or fine-tuning. Our approach integrates three core components: (1) Normalized Noise Perception, which captures and normalizes training dynamics to handle distribution shifts; (2) Time-Series Encoding, which models the temporal evolution of sample statistics using a recurrent neural network; and (3) Subclass Decoding, which predicts a corrected label distribution based on the learned representations. We conduct extensive experiments on benchmark datasets with various noise types and levels, demonstrating that TMLC-Net consistently outperforms state-of-the-art methods in terms of both accuracy and robustness to label noise. Furthermore, we analyze the transferability of TMLC-Net, showcasing its adaptability to new datasets and noise conditions, and establishing its potential as a broadly applicable solution for robust deep learning in noisy environments.
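The abstract's three-stage pipeline (normalize per-sample training dynamics, encode their temporal evolution with an RNN, decode a corrected label distribution) can be sketched as follows. This is a minimal illustration under assumptions, not the paper's implementation: the per-epoch z-score normalization, the Elman-style RNN, and the linear-softmax decoder head are stand-ins chosen for simplicity, and all names (`normalized_noise_perception`, `SimpleRNNEncoder`, `subclass_decode`) are hypothetical.

```python
import numpy as np

def normalized_noise_perception(loss_history):
    """Z-score each epoch's sample losses across the batch, so the
    statistics stay comparable across datasets and architectures.
    loss_history: (num_epochs, num_samples)."""
    mu = loss_history.mean(axis=1, keepdims=True)
    sigma = loss_history.std(axis=1, keepdims=True) + 1e-8
    return (loss_history - mu) / sigma

class SimpleRNNEncoder:
    """Minimal Elman RNN: folds the per-epoch statistics of one sample
    into a fixed-size hidden vector (a stand-in for the paper's
    time-series encoding module)."""
    def __init__(self, input_dim, hidden_dim, seed=0):
        rng = np.random.default_rng(seed)
        self.W_x = rng.normal(scale=0.1, size=(input_dim, hidden_dim))
        self.W_h = rng.normal(scale=0.1, size=(hidden_dim, hidden_dim))
        self.b = np.zeros(hidden_dim)

    def encode(self, seq):
        """seq: (num_epochs, input_dim) -> (hidden_dim,)."""
        h = np.zeros(self.W_h.shape[1])
        for x_t in seq:
            h = np.tanh(x_t @ self.W_x + h @ self.W_h + self.b)
        return h

def subclass_decode(h, W_out, b_out):
    """Linear head + softmax: decodes the encoded dynamics into a
    corrected label distribution over the classes."""
    logits = h @ W_out + b_out
    e = np.exp(logits - logits.max())
    return e / e.sum()

# Usage sketch: 8 epochs of synthetic losses for 5 samples, 3 classes.
T, N, C, H = 8, 5, 3, 16
rng = np.random.default_rng(1)
loss_history = rng.random((T, N))
stats = normalized_noise_perception(loss_history)

encoder = SimpleRNNEncoder(input_dim=1, hidden_dim=H)
W_out = rng.normal(scale=0.1, size=(H, C))
b_out = np.zeros(C)

# Corrected label distribution for sample 0.
probs = subclass_decode(encoder.encode(stats[:, 0:1]), W_out, b_out)
```

The design point the sketch captures is why the correction can transfer: the decoder never sees raw losses, only normalized training dynamics, which are distribution-shift-robust features shared across tasks.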
Problem

Research questions and friction points this paper is trying to address.

Improves noisy label correction in deep learning
Enhances transferability across datasets and models
Introduces adaptable strategies for various noise types
Innovation

Methods, ideas, or system contributions that make the work stand out.

Transferable Meta-Learner
Normalized Noise Perception
Time-Series Encoding