Abstract
Multi-source domain adaptation studies the transfer of knowledge from multiple labeled source domains to an unlabeled target domain under distribution shift. However, little effort has been devoted to the adversarial vulnerability of multi-source domain adaptation approaches. Specifically, most existing techniques focus on learning a domain-invariant representation to mitigate the distribution shift across domains. In this paper, we theoretically show that a domain-invariant representation alone cannot guarantee the success of multi-source domain adaptation when no labeled samples are available in the target domain. This result motivates us to propose a unified framework (AdaptAttack) for data-poisoning adversarial attacks on multi-source domain adaptation. The key idea is to maliciously manipulate the label-informed data distributions of the source domains by injecting imperceptible noise into the source data. In addition, the framework requires the generated attacks to be invisible to multi-source domain adaptation algorithms, i.e., they leave the source classification errors and the marginal discrepancies across domains unaffected. Extensive experiments on public domain adaptation benchmarks confirm the effectiveness and computational efficiency of the proposed AdaptAttack framework in both white-box and black-box attack scenarios.
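To make the attack idea concrete, below is a minimal PGD-style sketch of the poisoning step the abstract describes, assuming a PyTorch surrogate encoder/classifier and a linear-kernel MMD as the marginal-discrepancy proxy. The function names, the flipped-label targets `y_flip`, and the weight `lam` are illustrative assumptions for exposition, not the authors' AdaptAttack implementation.

```python
import torch
import torch.nn.functional as F

def mmd_linear(f_src, f_tgt):
    # Linear-kernel MMD between two feature batches: a simple proxy for the
    # marginal discrepancy that alignment-based MSDA methods monitor.
    return (f_src.mean(dim=0) - f_tgt.mean(dim=0)).pow(2).sum()

def poison_step(x_src, y_flip, x_tgt, encoder, classifier,
                eps=8 / 255, step=2 / 255, lam=1.0):
    # One projected-gradient step that nudges source inputs toward wrong
    # labels (corrupting the label-informed conditional distribution) while
    # penalizing growth in the source/target marginal MMD, so the
    # perturbation stays invisible to discrepancy-based MSDA training.
    delta = torch.zeros_like(x_src, requires_grad=True)
    f_src = encoder(x_src + delta)
    f_tgt = encoder(x_tgt).detach()
    attack_loss = F.cross_entropy(classifier(f_src), y_flip)
    stealth_loss = mmd_linear(f_src, f_tgt)
    (attack_loss + lam * stealth_loss).backward()
    with torch.no_grad():
        # L_inf projection keeps the noise perceptibly small.
        delta = (delta - step * delta.grad.sign()).clamp_(-eps, eps)
    # Keep the poisoned inputs in the valid pixel range.
    return (x_src + delta).clamp_(0.0, 1.0).detach()
```

In a white-box setting the surrogate would be the victim's own feature extractor; in a black-box setting one would rely on transferability from an independently trained surrogate.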
| Original language | English (US) |
|---|---|
| Pages (from-to) | 11039-11050 |
| Number of pages | 12 |
| Journal | IEEE Transactions on Knowledge and Data Engineering |
| Volume | 35 |
| Issue number | 11 |
| DOIs | |
| State | Published - Nov 1 2023 |
Keywords
- Adversarial attacks
- domain discrepancy
- multi-source domain adaptation
ASJC Scopus subject areas
- Information Systems
- Computer Science Applications
- Computational Theory and Mathematics