One-Shot Domain Incremental Learning

📅 2024-03-25
🏛️ IEEE International Joint Conference on Neural Networks (IJCNN)
📈 Citations: 0
Influential citations: 0
🤖 AI Summary
This paper addresses one-shot domain incremental learning (DIL), an extreme setting in which only a single sample from a new domain is available. Existing DIL methods degrade severely in this regime because of large shifts in batch normalization (BN) statistics. The authors formally define the one-shot DIL task, identify BN statistic miscalibration as the fundamental bottleneck, and propose a lightweight statistic calibration mechanism that combines domain adaptation and meta-learning principles to re-estimate the BN parameters. The method requires no fine-tuning of the backbone network and adapts using only one sample per novel domain. Experiments on multiple standard benchmarks show an average accuracy improvement of 27.3% on single-sample novel domains over state-of-the-art DIL methods, supporting the approach's effectiveness, generalizability, and practicality.
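The core idea of calibrating BN statistics from one sample can be sketched as blending the stored source-domain running statistics with statistics estimated from the single new-domain input. This is an illustrative sketch only: the function names, the per-channel layout, and the blending weight `momentum` are assumptions, not the paper's exact calibration rule.

```python
def calibrate_bn_stats(running_mean, running_var,
                       sample_mean, sample_var, momentum=0.1):
    """Blend stored source-domain BN statistics with statistics
    estimated from the single new-domain sample.

    Hypothetical blending rule (AdaBN-style interpolation); the
    paper's actual calibration mechanism may differ.
    """
    mean = [(1 - momentum) * rm + momentum * sm
            for rm, sm in zip(running_mean, sample_mean)]
    var = [(1 - momentum) * rv + momentum * sv
           for rv, sv in zip(running_var, sample_var)]
    return mean, var


def normalize(x, mean, var, gamma=1.0, beta=0.0, eps=1e-5):
    """Standard BN transform using the (re-calibrated) statistics."""
    return gamma * (x - mean) / (var + eps) ** 0.5 + beta
```

With `momentum=0.1`, the calibrated statistics stay close to the source-domain values while shifting slightly toward the one-sample estimate, which is one plausible way to avoid over-fitting the statistics to a single input.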

📝 Abstract
Domain incremental learning (DIL) has been discussed in previous studies on deep neural network models for classification. In DIL, we assume that samples from new domains are observed over time, and the models must classify inputs on all domains. In practice, however, we may need to perform DIL under the constraint that samples from the new domain are observed only infrequently. Therefore, in this study, we consider the extreme case where we have only one sample from the new domain, which we call one-shot DIL. We first empirically show that existing DIL methods do not work well in one-shot DIL, and we analyze the reasons for this failure through various investigations. Our analysis clarifies that the difficulty of one-shot DIL is caused by the statistics in the batch normalization layers. We therefore propose a technique concerning these statistics and demonstrate its effectiveness through experiments on open datasets.
Problem

Research questions and friction points this paper is trying to address.

How to perform domain incremental learning when only a single sample from the new domain is available
Why existing DIL methods fail in the one-shot setting
How to correct the batch normalization statistics that cause this failure
Innovation

Methods, ideas, or system contributions that make the work stand out.

Formal definition and empirical study of one-shot domain incremental learning
Identification of batch normalization statistics as the cause of failure
A calibration technique for these statistics, validated on open datasets
👥 Authors
Yasushi Esaki — Toyota Central R&D Labs., Inc. (machine learning, deep learning)
Satoshi Koide — Toyota Central R&D Labs., Inc., Aichi, Japan
Takuro Kutsuna — Toyota Central R&D Labs., Inc., Aichi, Japan