TY - GEN
T1 - Game of threads: Enabling asynchronous poisoning attacks
T2 - 25th International Conference on Architectural Support for Programming Languages and Operating Systems, ASPLOS 2020
AU - Sanchez Vicarte, Jose Rodrigo
AU - Schreiber, Benjamin
AU - Paccagnella, Riccardo
AU - Fletcher, Christopher W.
N1 - Publisher Copyright:
© 2020 Association for Computing Machinery.
PY - 2020/3/9
Y1 - 2020/3/9
AB - As data sizes continue to grow at an unprecedented rate, machine learning training is being forced to adopt asynchronous algorithms to maintain performance and scalability. In asynchronous training, many threads share and update model parameters in a racy fashion to avoid costly inter-thread synchronization. This paper studies the security implications of these codes by introducing asynchronous poisoning attacks. Our attack influences the training outcome (e.g., degrades model accuracy or biases the model toward an adversary-specified label) purely by scheduling asynchronous training threads in a malicious fashion. Since thread scheduling is outside the protections of modern trusted execution environments (TEEs), e.g., Intel SGX, our attack bypasses these protections even when the training set can be verified as correct. To the best of our knowledge, this represents the first example where a class of applications loses integrity guarantees despite being protected by enclave-based TEEs such as SGX. We demonstrate both accuracy-degradation and model-biasing attacks on the CIFAR-10 image recognition task, trained on ResNet-style DNNs using an asynchronous training code published by PyTorch. We also perform proof-of-concept experiments on an SGX-enabled machine to validate our assumptions. Our accuracy-degradation attacks can return a converged model to pre-trained accuracy or to some accuracy in between. Our model-biasing attack can force the model to predict an adversary-specified label up to ~40% of the time on the CIFAR-10 validation set, depending on parameters (whereas the unattacked model's prediction rate toward any label is ~10%).
UR - http://www.scopus.com/inward/record.url?scp=85082400975&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85082400975&partnerID=8YFLogxK
U2 - 10.1145/3373376.3378462
DO - 10.1145/3373376.3378462
M3 - Conference contribution
AN - SCOPUS:85082400975
T3 - International Conference on Architectural Support for Programming Languages and Operating Systems - ASPLOS
SP - 35
EP - 52
BT - ASPLOS 2020 - 25th International Conference on Architectural Support for Programming Languages and Operating Systems
PB - Association for Computing Machinery
Y2 - 16 March 2020 through 20 March 2020
ER -
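
Note: the abstract describes Hogwild-style asynchronous training, in which worker threads update shared model parameters with no locks and no gradient synchronization; the unsynchronized optimizer steps are the racy writes whose scheduling the attack manipulates. Below is a minimal sketch of that pattern in PyTorch, assuming a toy linear model and random placeholder data; it is illustrative of the technique only, not the paper's actual CIFAR-10/ResNet training code.

    # Hogwild-style asynchronous SGD sketch (model, data, and
    # hyperparameters are placeholders, not the paper's setup).
    import torch
    import torch.nn as nn
    import torch.multiprocessing as mp

    def train_worker(model, steps=1000):
        # Each worker races on the shared parameters: no locks, no
        # gradient synchronization between threads/processes.
        opt = torch.optim.SGD(model.parameters(), lr=0.01)
        loss_fn = nn.CrossEntropyLoss()
        for _ in range(steps):
            x = torch.randn(32, 10)            # stand-in for a real minibatch
            y = torch.randint(0, 2, (32,))     # stand-in labels
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()                         # unsynchronized write to shared weights

    if __name__ == "__main__":
        model = nn.Linear(10, 2)
        model.share_memory()                   # place parameters in shared memory
        workers = [mp.Process(target=train_worker, args=(model,))
                   for _ in range(4)]
        for w in workers:
            w.start()
        for w in workers:
            w.join()

Because every worker writes directly to the shared parameters, an adversary who controls OS scheduling (which TEEs such as SGX do not protect) can stall or interleave workers so that stale updates dominate, which is the lever behind the accuracy-degradation and model-biasing results reported above.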