TY - GEN
T1 - Property inference attacks on fully connected neural networks using permutation invariant representations
AU - Ganju, Karan
AU - Wang, Qi
AU - Yang, Wei
AU - Gunter, Carl A.
AU - Borisov, Nikita
N1 - Funding Information:
This work was supported in part by NSF CNS 13-30491, NSF CNS 14-08944, and DOE OE0000780. The views expressed are those of the authors only. We appreciate the valuable insights from our CCS reviewers and our shepherd, Giuseppe Ateniese.
Publisher Copyright:
© 2018 Association for Computing Machinery.
PY - 2018/10/15
Y1 - 2018/10/15
AB - With the growing adoption of machine learning, sharing of learned models is becoming popular. However, in addition to the prediction properties the model producer aims to share, there is also a risk that the model consumer can infer other properties of the training data the model producer did not intend to share. In this paper, we focus on the inference of global properties of the training data, such as the environment in which the data was produced, or the fraction of the data that comes from a certain class, as applied to white-box Fully Connected Neural Networks (FCNNs). Because of their complexity and inscrutability, FCNNs have a particularly high risk of leaking unexpected information about their training sets; at the same time, this complexity makes extracting this information challenging. We develop techniques that reduce this complexity by noting that FCNNs are invariant under permutation of nodes in each layer. We develop our techniques using representations that capture this invariance and simplify the information extraction task. We evaluate our techniques on several synthetic and standard benchmark datasets and show that they are very effective at inferring various data properties. We also perform two case studies to demonstrate the impact of our attack. In the first case study we show that a classifier that recognizes smiling faces also leaks information about the relative attractiveness of the individuals in its training set. In the second case study we show that a classifier that recognizes Bitcoin mining from performance counters also leaks information about whether the classifier was trained on logs from machines that were patched for the Meltdown and Spectre attacks.
KW - Neural networks
KW - Permutation equivalence
KW - Property inference
UR - http://www.scopus.com/inward/record.url?scp=85056873127&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85056873127&partnerID=8YFLogxK
U2 - 10.1145/3243734.3243834
DO - 10.1145/3243734.3243834
M3 - Conference contribution
AN - SCOPUS:85056873127
T3 - Proceedings of the ACM Conference on Computer and Communications Security
SP - 619
EP - 633
BT - CCS 2018 - Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security
PB - Association for Computing Machinery
T2 - 25th ACM Conference on Computer and Communications Security, CCS 2018
Y2 - 15 October 2018
ER -