Noise-Aware Generalization: Robustness to In-Domain Noise and Out-of-Domain Generalization

📅 2025-04-03
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This paper addresses the performance degradation of multi-source domain generalization (DG) under label noise, proposing a novel setting termed “Noise-Aware Generalization” (NAG) that systematically models the coupled challenges of label noise and domain shift for the first time. To decouple these two sources of perturbation, we introduce DL4ND—a domain-label-free method that detects noisy labels robustly via cross-domain sample consistency discrepancies, without requiring domain annotations. DL4ND further employs contrastive learning to model noise-sensitive features and integrates dynamic label correction with a robust loss optimization strategy. Evaluated on four heterogeneous benchmarks, DL4ND significantly outperforms state-of-the-art methods, achieving an average accuracy gain of 4.2%. The results empirically validate that cross-domain variation analysis effectively enhances noise-robust generalization capability.

Technology Category

Application Category

📝 Abstract
Multi-source Domain Generalization (DG) aims to improve model robustness to new distributions. However, DG methods often overlook the effect of label noise, which can confuse a model during training, reducing performance. Limited prior work has analyzed DG method's noise-robustness, typically focused on an analysis of existing methods rather than new solutions. In this paper, we investigate this underexplored space, where models are evaluated under both distribution shifts and label noise, which we refer to as Noise-Aware Generalization (NAG). A natural solution to address label noise would be to combine a Learning with Noisy Labels (LNL) method with those from DG. Many LNL methods aim to detect distribution shifts in a class's samples, i.e., they assume that distribution shifts often correspond to label noise. However, in NAG distribution shifts can be due to label noise or domain shifts, breaking the assumptions used by LNL methods. A naive solution is to make a similar assumption made by many DG methods, where we presume to have domain labels during training, enabling us to isolate the two types of shifts. However, this ignores valuable cross-domain information. Specifically, our proposed DL4ND approach improves noise detection by taking advantage of the observation that noisy samples that may appear indistinguishable within a single domain often show greater variation when compared across domains. Experiments show that DL4ND significantly improves performance across four diverse datasets, offering a promising direction for tackling NAG.
Problem

Research questions and friction points this paper is trying to address.

Analyzes Noise-Aware Generalization under distribution shifts and label noise
Combines Learning with Noisy Labels and Domain Generalization methods
Proposes DL4ND to improve noise detection using cross-domain information
Innovation

Methods, ideas, or system contributions that make the work stand out.

Combines Learning with Noisy Labels and Domain Generalization
Detects noise using cross-domain sample variation
Improves robustness to both noise and domain shifts
🔎 Similar Papers
No similar papers found.
S
Siqi Wang
Boston University
Aoming Liu
Aoming Liu
Ph.D student, Boston University
Computer VisionMachine Learning
B
Bryan A. Plummer
Boston University