TY - GEN
T1 - How to Cover up Anomalous Accesses to Electronic Health Records
AU - Xu, Xiaojun
AU - Hao, Qingying
AU - Yang, Zhuolin
AU - Li, Bo
AU - Liebovitz, David
AU - Wang, Gang
AU - Gunter, Carl A.
N1 - Publisher Copyright:
© USENIX Security 2023. All rights reserved.
PY - 2023
Y1 - 2023
AB - Illegitimate access detection systems in hospital logs perform post hoc detection instead of runtime access restriction to allow widespread access in emergencies. We study the effectiveness of adversarial machine learning strategies against such detection systems on a large-scale dataset consisting of a year of access logs at a major hospital. We study a range of graph-based anomaly detection systems, including heuristic-based and Graph Neural Network (GNN)-based models. We find that evasion attacks, in which covering accesses (that is, accesses made to disguise a target access) are injected during the evaluation period of the target access, can successfully fool the detection system. We also show that such evasion attacks can transfer among different detection algorithms. On the other hand, we find that poisoning attacks, in which adversaries inject covering accesses during the training phase of the model, do not effectively mislead the trained detection system unless the attacker is given unrealistic capabilities, such as injecting over 10,000 accesses or imposing a high weight on the covering accesses in the training algorithm. To examine the generalizability of the results, we also apply our attack against a state-of-the-art detection model on the LANL network lateral movement dataset and reach similar conclusions.
UR - http://www.scopus.com/inward/record.url?scp=85176123086&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85176123086&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85176123086
T3 - 32nd USENIX Security Symposium, USENIX Security 2023
SP - 229
EP - 246
BT - 32nd USENIX Security Symposium, USENIX Security 2023
PB - USENIX Association
T2 - 32nd USENIX Security Symposium, USENIX Security 2023
Y2 - 9 August 2023 through 11 August 2023
ER -