Protecting Model Adaptation from Trojans in the Unlabeled Data

📅 2024-01-11
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This work identifies a novel threat in label-free domain adaptation: Trojan attacks in which adversaries poison unlabeled target samples to embed backdoors, achieving high attack success rates without degrading clean-sample performance. To address this, we propose DiffAdapt, the first plug-and-play defense framework that requires no modification to the underlying adaptation algorithm and operates without access to source data or labels. Its core is a lightweight detection mechanism grounded in feature-difference modeling and gradient-sensitivity analysis, compatible with mainstream unsupervised and semi-supervised adaptation methods (e.g., SHOT, NRC, DC&DN). Evaluated on the Office-Home, VisDA, and DomainNet benchmarks, DiffAdapt reduces backdoor attack success rates to below 5% while incurring less than 1.2% accuracy degradation on clean samples, significantly enhancing the security and robustness of domain-adapted models against stealthy poisoning threats.
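The summary describes a detector built on feature-difference modeling: samples whose representations shift sharply under small input perturbations are treated as likely trigger carriers. A minimal sketch of that idea, assuming we already have features extracted from a clean view and a lightly perturbed view of each unlabeled target sample (this is an illustrative heuristic, not the paper's exact DiffAdapt detector; the function names and the quantile threshold are assumptions):

```python
import numpy as np

def trojan_suspicion_scores(feats_clean_view, feats_perturbed_view):
    """Score each unlabeled target sample by how much its feature
    representation shifts under a small input perturbation.
    Trigger-carrying samples tend to show larger shifts."""
    diffs = feats_perturbed_view - feats_clean_view
    return np.linalg.norm(diffs, axis=1)

def filter_suspicious(scores, quantile=0.95):
    """Keep only samples whose shift score falls below a quantile
    threshold; the flagged remainder is excluded from adaptation."""
    thresh = np.quantile(scores, quantile)
    return scores <= thresh
```

A detector like this can wrap any adaptation method, which is what makes the plug-and-play claim plausible: the adaptation algorithm itself is untouched, only its input set is filtered.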

๐Ÿ“ Abstract
Model adaptation tackles the distribution shift problem with a pre-trained model instead of raw data, and has become a popular paradigm due to its strong privacy protection. Existing methods typically assume adaptation to a clean target domain, overlooking the security risks posed by unlabeled samples. This paper is the first to explore potential trojan attacks on model adaptation launched through well-designed poisoning of target data. Concretely, we provide two trigger patterns with two poisoning strategies, matched to different levels of prior knowledge held by attackers. These attacks achieve a high success rate while maintaining normal performance on clean samples at test time. To defend against such backdoor injection, we propose a plug-and-play method named DiffAdapt, which can be seamlessly integrated with existing adaptation algorithms. Experiments across commonly used benchmarks and adaptation methods demonstrate the effectiveness of DiffAdapt. We hope this work will shed light on the safety of transfer learning with unlabeled data.
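The abstract's attack premise is that an adversary only needs to inject trigger-stamped samples into the unlabeled target set, since no labels are checked during adaptation. A minimal sketch of one such poisoning step with a fixed corner-patch trigger (the patch size, value, and poisoning rate are illustrative assumptions; the paper's actual trigger patterns and strategies may differ):

```python
import numpy as np

def stamp_patch_trigger(images, patch_value=1.0, size=3):
    """Stamp a small solid patch in the bottom-right corner of each
    image: a simple fixed-pattern backdoor trigger."""
    poisoned = images.copy()
    poisoned[:, -size:, -size:] = patch_value
    return poisoned

def poison_fraction(images, rate=0.1, seed=0):
    """Stamp the trigger on a random fraction of the unlabeled target
    set, returning the poisoned set and the poisoned indices."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(images), size=int(rate * len(images)),
                     replace=False)
    out = images.copy()
    out[idx] = stamp_patch_trigger(images[idx])
    return out, idx
```

Because the remaining 90% of samples are untouched, clean-sample accuracy is preserved, which is exactly the stealth property the abstract highlights.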
Problem

Research questions and friction points this paper is trying to address.

Explores trojan attacks on model adaptation
Proposes defense against backdoor injection
Ensures safety in transfer learning with unlabeled data
Innovation

Methods, ideas, or system contributions that make the work stand out.

Proposes DiffAdapt defense method
Explores trojan attacks on adaptation
Uses plug-and-play integration