TY - GEN
T1 - SemAttack: Natural Textual Attacks via Different Semantic Spaces
T2 - 2022 Findings of the Association for Computational Linguistics: NAACL 2022
AU - Wang, Boxin
AU - Xu, Chejian
AU - Liu, Xiangyu
AU - Cheng, Yu
AU - Li, Bo
N1 - We gratefully thank the anonymous reviewers and meta-reviewers for their constructive feedback. This work is partially supported by the NSF grant No.1910100, NSF CNS 20-46726 CAR, and Sloan Fellowship.
PY - 2022
Y1 - 2022
N2 - Recent studies show that pre-trained language models (LMs) are vulnerable to textual adversarial attacks. However, existing attack methods either suffer from low attack success rates or fail to search efficiently in the exponentially large perturbation space. We propose an efficient and effective framework SemAttack to generate natural adversarial text by constructing different semantic perturbation functions. In particular, SemAttack optimizes the generated perturbations constrained on generic semantic spaces, including typo space, knowledge space (e.g., WordNet), contextualized semantic space (e.g., the embedding space of BERT clusterings), or the combination of these spaces. Thus, the generated adversarial texts are more semantically close to the original inputs. Extensive experiments reveal that state-of-the-art (SOTA) large-scale LMs (e.g., DeBERTa-v2) and defense strategies (e.g., FreeLB) are still vulnerable to SemAttack. We further demonstrate that SemAttack is general and able to generate natural adversarial texts for different languages (e.g., English and Chinese) with high attack success rates. Human evaluations also confirm that our generated adversarial texts are natural and barely affect human performance. Our code is publicly available at https://github.com/AI-secure/SemAttack.
UR - http://www.scopus.com/inward/record.url?scp=85137337269&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85137337269&partnerID=8YFLogxK
U2 - 10.18653/v1/2022.findings-naacl.14
DO - 10.18653/v1/2022.findings-naacl.14
M3 - Conference contribution
AN - SCOPUS:85137337269
T3 - Findings of the Association for Computational Linguistics: NAACL 2022 - Findings
SP - 176
EP - 205
BT - Findings of the Association for Computational Linguistics: NAACL 2022
PB - Association for Computational Linguistics (ACL)
Y2 - 10 July 2022 through 15 July 2022
ER -