TY - GEN
T1 - Smart malware that uses leaked control data of robotic applications
T2 - 22nd International Symposium on Research in Attacks, Intrusions and Defenses, RAID 2019
AU - Chung, Keywhan
AU - Li, Xiao
AU - Tang, Peicheng
AU - Zhu, Zeran
AU - Kalbarczyk, Zbigniew T.
AU - Iyer, Ravishankar K.
AU - Kesavadas, Thenkurussi
N1 - Publisher Copyright:
© 2019 RAID 2019 Proceedings - 22nd International Symposium on Research in Attacks, Intrusions and Defenses. All rights reserved.
PY - 2019
Y1 - 2019
N2 - In this paper, we demonstrate a new type of threat that leverages machine learning techniques to maximize its impact. We use the Raven-II surgical robot and its haptic feedback rendering algorithm as the target application. We exploit ROS vulnerabilities and implement smart self-learning malware that can track the movements of the robot’s arms and trigger the attack payload when the robot is in a critical stage of a (hypothetical) surgical procedure. By keeping the learning procedure internal to the malicious node, which runs outside the physical components of the robotic application, an adversary can hide most of the malicious activities from security monitors that might be deployed in the system. Moreover, if the attack payload mimics an accidental failure, the system administrator is likely to miss the malicious intent and treat the attack as an accidental failure. After demonstrating the security threats, we devise methods (i.e., a safety engine) to protect the robotic system against the identified risk.
AB - In this paper, we demonstrate a new type of threat that leverages machine learning techniques to maximize its impact. We use the Raven-II surgical robot and its haptic feedback rendering algorithm as the target application. We exploit ROS vulnerabilities and implement smart self-learning malware that can track the movements of the robot’s arms and trigger the attack payload when the robot is in a critical stage of a (hypothetical) surgical procedure. By keeping the learning procedure internal to the malicious node, which runs outside the physical components of the robotic application, an adversary can hide most of the malicious activities from security monitors that might be deployed in the system. Moreover, if the attack payload mimics an accidental failure, the system administrator is likely to miss the malicious intent and treat the attack as an accidental failure. After demonstrating the security threats, we devise methods (i.e., a safety engine) to protect the robotic system against the identified risk.
UR - http://www.scopus.com/inward/record.url?scp=85077822237&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85077822237&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85077822237
T3 - RAID 2019 Proceedings - 22nd International Symposium on Research in Attacks, Intrusions and Defenses
SP - 337
EP - 351
BT - RAID 2019 Proceedings - 22nd International Symposium on Research in Attacks, Intrusions and Defenses
PB - USENIX Association
Y2 - 23 September 2019 through 25 September 2019
ER -