TY - GEN
T1 - Exploring connections between active learning and model extraction
AU - Chandrasekaran, Varun
AU - Chaudhuri, Kamalika
AU - Giacomelli, Irene
AU - Jha, Somesh
AU - Yan, Songbai
N1 - Funding Information:
Acknowledgements This material is partially supported by Air Force Grant FA9550-18-1-0166, the National Science Foundation (NSF) Grants CCF-FMitF-1836978, SaTC-Frontiers-1804648, CCF-1652140, CNS-1838733, CNS-1719336, CNS-1647152, CNS-1629833, and ARO grant number W911NF-17-1-0405. Kamalika Chaudhuri and Songbai Yan thank NSF under 1719133 and 1804829 for research support.
Publisher Copyright:
© 2020 by The USENIX Association. All Rights Reserved.
PY - 2020
Y1 - 2020
N2 - Machine learning is being increasingly used by individuals, research institutions, and corporations. This has resulted in a surge of Machine Learning-as-a-Service (MLaaS) - cloud services that provide (a) tools and resources to learn the model, and (b) a user-friendly query interface to access the model. However, such MLaaS systems raise concerns such as model extraction. In model extraction attacks, adversaries maliciously exploit the query interface to steal the model. More precisely, in a model extraction attack, a good approximation of a sensitive or proprietary model held by the server is extracted (i.e., learned) by a dishonest user who interacts with the server only via the query interface. This attack was introduced by Tramèr et al. at the 2016 USENIX Security Symposium, where practical attacks for various models were shown. We believe that better understanding the efficacy of model extraction attacks is paramount to designing secure MLaaS systems. To that end, we take the first step by (a) formalizing model extraction and discussing possible defense strategies, and (b) drawing parallels between model extraction and the established area of active learning. In particular, we show that recent advancements in the active learning domain can be used to implement powerful model extraction attacks, and investigate possible defense strategies.
AB - Machine learning is being increasingly used by individuals, research institutions, and corporations. This has resulted in a surge of Machine Learning-as-a-Service (MLaaS) - cloud services that provide (a) tools and resources to learn the model, and (b) a user-friendly query interface to access the model. However, such MLaaS systems raise concerns such as model extraction. In model extraction attacks, adversaries maliciously exploit the query interface to steal the model. More precisely, in a model extraction attack, a good approximation of a sensitive or proprietary model held by the server is extracted (i.e., learned) by a dishonest user who interacts with the server only via the query interface. This attack was introduced by Tramèr et al. at the 2016 USENIX Security Symposium, where practical attacks for various models were shown. We believe that better understanding the efficacy of model extraction attacks is paramount to designing secure MLaaS systems. To that end, we take the first step by (a) formalizing model extraction and discussing possible defense strategies, and (b) drawing parallels between model extraction and the established area of active learning. In particular, we show that recent advancements in the active learning domain can be used to implement powerful model extraction attacks, and investigate possible defense strategies.
UR - http://www.scopus.com/inward/record.url?scp=85091915276&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85091915276&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85091915276
T3 - Proceedings of the 29th USENIX Security Symposium
SP - 1309
EP - 1326
BT - Proceedings of the 29th USENIX Security Symposium
PB - USENIX Association
T2 - 29th USENIX Security Symposium
Y2 - 12 August 2020 through 14 August 2020
ER -