TY - GEN
T1 - Image reconstruction attacks on distributed machine learning models
AU - Benkraouda, Hadjer
AU - Nahrstedt, Klara
N1 - Publisher Copyright:
© 2021 ACM.
PY - 2021/12/7
Y1 - 2021/12/7
N2 - Recent developments in Deep Neural Networks have resulted in their wide deployment in services across many aspects of human life, including security-critical domains that handle sensitive data. Concurrently, we have seen a proliferation of IoT devices with limited resources. Together, these two trends have led to the distribution of data analysis, processing, and decision making between edge devices and third parties such as cloud services. In this work we assess the security of previously proposed distributed machine learning (ML) schemes by analyzing the information leaked from the output of the edge devices, i.e., the intermediate representation (IR). In particular, we examine a Deep Neural Network used for video/image classification and tackle the problem of image/frame reconstruction from the output of the edge device. Our work focuses on assessing whether the proposed scheme of partitioned enclave execution is secure against chosen-image attacks (CIA). Given that the attacker can query the model under attack (the victim model) to create image-IR pairs, can the attacker reconstruct the private input images? In this work we show that it is possible to carry out a black-box reconstruction attack by training a CNN-based encoder-decoder architecture (reconstruction model) using image-IR pairs. Our tests show that the proposed reconstruction model achieves 70% similarity between the original and reconstructed images.
AB - Recent developments in Deep Neural Networks have resulted in their wide deployment in services across many aspects of human life, including security-critical domains that handle sensitive data. Concurrently, we have seen a proliferation of IoT devices with limited resources. Together, these two trends have led to the distribution of data analysis, processing, and decision making between edge devices and third parties such as cloud services. In this work we assess the security of previously proposed distributed machine learning (ML) schemes by analyzing the information leaked from the output of the edge devices, i.e., the intermediate representation (IR). In particular, we examine a Deep Neural Network used for video/image classification and tackle the problem of image/frame reconstruction from the output of the edge device. Our work focuses on assessing whether the proposed scheme of partitioned enclave execution is secure against chosen-image attacks (CIA). Given that the attacker can query the model under attack (the victim model) to create image-IR pairs, can the attacker reconstruct the private input images? In this work we show that it is possible to carry out a black-box reconstruction attack by training a CNN-based encoder-decoder architecture (reconstruction model) using image-IR pairs. Our tests show that the proposed reconstruction model achieves 70% similarity between the original and reconstructed images.
KW - image reconstruction
KW - neural networks
KW - trusted execution environment
UR - http://www.scopus.com/inward/record.url?scp=85121649553&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85121649553&partnerID=8YFLogxK
U2 - 10.1145/3488659.3493779
DO - 10.1145/3488659.3493779
M3 - Conference contribution
AN - SCOPUS:85121649553
T3 - DistributedML 2021 - Proceedings of the 2nd ACM International Workshop on Distributed Machine Learning, Part of CoNEXT 2021
SP - 29
EP - 35
BT - DistributedML 2021 - Proceedings of the 2nd ACM International Workshop on Distributed Machine Learning, Part of CoNEXT 2021
PB - Association for Computing Machinery
T2 - 2nd ACM International Workshop on Distributed Machine Learning, DistributedML 2021, co-located with the 17th International Conference on emerging Networking EXperiments and Technologies, CoNEXT 2021
Y2 - 7 December 2021
ER -