TY - CONF
T1 - Cache telepathy: Leveraging shared resource attacks to learn DNN architectures
T2 - 29th USENIX Security Symposium
AU - Yan, Mengjia
AU - Fletcher, Christopher W.
AU - Torrellas, Josep
N1 - Publisher Copyright:
© 2020 by The USENIX Association. All Rights Reserved.
PY - 2020
Y1 - 2020
N2 - Deep Neural Networks (DNNs) are fast becoming ubiquitous for their ability to attain good accuracy in various machine learning tasks. A DNN's architecture (i.e., its hyper-parameters) broadly determines the DNN's accuracy and performance, and is often confidential. Attacking a DNN in the cloud to obtain its architecture can potentially provide major commercial value. Further, attaining a DNN's architecture facilitates other existing DNN attacks. This paper presents Cache Telepathy: an efficient mechanism to help obtain a DNN's architecture using the cache side channel. The attack is based on the insight that DNN inference relies heavily on tiled GEMM (Generalized Matrix Multiply), and that DNN architecture parameters determine the number of GEMM calls and the dimensions of the matrices used in the GEMM functions. Such information can be leaked through the cache side channel. This paper uses Prime+Probe and Flush+Reload to attack the VGG and ResNet DNNs running OpenBLAS and Intel MKL libraries. Our attack is effective in helping obtain the DNN architectures by very substantially reducing the search space of target DNN architectures. For example, when attacking the OpenBLAS library, for the different layers in VGG-16, it reduces the search space from more than 5.4 × 10^12 architectures to just 16; for the different modules in ResNet-50, it reduces the search space from more than 6 × 10^46 architectures to only 512.
AB - Deep Neural Networks (DNNs) are fast becoming ubiquitous for their ability to attain good accuracy in various machine learning tasks. A DNN's architecture (i.e., its hyper-parameters) broadly determines the DNN's accuracy and performance, and is often confidential. Attacking a DNN in the cloud to obtain its architecture can potentially provide major commercial value. Further, attaining a DNN's architecture facilitates other existing DNN attacks. This paper presents Cache Telepathy: an efficient mechanism to help obtain a DNN's architecture using the cache side channel. The attack is based on the insight that DNN inference relies heavily on tiled GEMM (Generalized Matrix Multiply), and that DNN architecture parameters determine the number of GEMM calls and the dimensions of the matrices used in the GEMM functions. Such information can be leaked through the cache side channel. This paper uses Prime+Probe and Flush+Reload to attack the VGG and ResNet DNNs running OpenBLAS and Intel MKL libraries. Our attack is effective in helping obtain the DNN architectures by very substantially reducing the search space of target DNN architectures. For example, when attacking the OpenBLAS library, for the different layers in VGG-16, it reduces the search space from more than 5.4 × 10^12 architectures to just 16; for the different modules in ResNet-50, it reduces the search space from more than 6 × 10^46 architectures to only 512.
UR - http://www.scopus.com/inward/record.url?scp=85090405239&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85090405239&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85090405239
T3 - Proceedings of the 29th USENIX Security Symposium
SP - 2003
EP - 2020
BT - Proceedings of the 29th USENIX Security Symposium
PB - USENIX Association
Y2 - 12 August 2020 through 14 August 2020
ER -