TY - GEN
T1 - Exploiting Programmatic Behavior of LLMs: Dual-Use Through Standard Security Attacks
T2 - 45th IEEE Symposium on Security and Privacy Workshops, SPW 2024
AU - Kang, Daniel
AU - Li, Xuechen
AU - Stoica, Ion
AU - Guestrin, Carlos
AU - Zaharia, Matei
AU - Hashimoto, Tatsunori
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024
Y1 - 2024
AB - Recent advances in instruction-following large language models (LLMs) have led to dramatic improvements in a range of NLP tasks. Unfortunately, we find that the same improved capabilities amplify the dual-use risks of these models for malicious purposes. Dual-use is difficult to prevent because instruction-following capabilities now enable standard attacks from computer security. The capabilities of these instruction-following LLMs also provide strong economic incentives for dual-use by malicious actors. In particular, we show that instruction-following LLMs can produce targeted malicious content, including hate speech and scams, bypassing in-the-wild defenses implemented by LLM API vendors. Our analysis shows that this content can be generated economically, at a cost 125-500× cheaper than human effort alone. Together, our findings suggest that LLMs will increasingly attract more sophisticated adversaries and attacks, and addressing these attacks may require new approaches to mitigations.
UR - http://www.scopus.com/inward/record.url?scp=85199174513&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85199174513&partnerID=8YFLogxK
DO - 10.1109/SPW63631.2024.00018
M3 - Conference contribution
AN - SCOPUS:85199174513
T3 - Proceedings - 45th IEEE Symposium on Security and Privacy Workshops, SPW 2024
SP - 132
EP - 143
BT - Proceedings - 45th IEEE Symposium on Security and Privacy Workshops, SPW 2024
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 23 May 2024
ER -