TY - JOUR
T1 - Persuasion-Based Robust Sensor Design against Attackers with Unknown Control Objectives
AU - Sayin, Muhammed O.
AU - Başar, Tamer
N1 - Manuscript received December 29, 2019; revised May 26, 2020; accepted October 4, 2020. Date of publication October 13, 2020; date of current version September 27, 2021. This work was supported in part by the U.S. Office of Naval Research (ONR) MURI under Grant N00014-16-1-2710, and in part by the U.S. Army Research Office (ARO) MURI under Grant AG285. Recommended by Associate Editor R. M. Jungers. (Corresponding author: Muhammed O. Sayin.) Muhammed O. Sayin is with the Laboratory for Information and Decision Systems, Massachusetts Institute of Technology, Cambridge, MA 02139 USA (e-mail: [email protected]).
PY - 2021/10
Y1 - 2021/10
N2 - We introduce a robust sensor design framework to provide 'persuasion-based' defense in stochastic control systems against an attacker of unknown type with a control objective exclusive to its type. We design a robust 'linear-plus-noise' signaling strategy in order to persuade the attacker to take actions that lead to minimum damage with respect to the system's objective. The specific model we adopt is a Gauss-Markov process driven by a controller with a (partially) 'unknown' malicious/benign control objective. We seek to defend against the worst possible distribution over control objectives in a robust way under the solution concept of Stackelberg equilibrium, where the sensor is the leader. We show that a necessary and sufficient condition on the covariance matrix of the posterior belief is a certain linear matrix inequality. This enables us to formulate an equivalent tractable problem, indeed a semidefinite program, to compute the robust sensor design strategies 'globally' even though the original optimization problem is nonconvex and highly nonlinear. We also extend this result to scenarios where the sensor makes noisy or partial measurements.
AB - We introduce a robust sensor design framework to provide 'persuasion-based' defense in stochastic control systems against an attacker of unknown type with a control objective exclusive to its type. We design a robust 'linear-plus-noise' signaling strategy in order to persuade the attacker to take actions that lead to minimum damage with respect to the system's objective. The specific model we adopt is a Gauss-Markov process driven by a controller with a (partially) 'unknown' malicious/benign control objective. We seek to defend against the worst possible distribution over control objectives in a robust way under the solution concept of Stackelberg equilibrium, where the sensor is the leader. We show that a necessary and sufficient condition on the covariance matrix of the posterior belief is a certain linear matrix inequality. This enables us to formulate an equivalent tractable problem, indeed a semidefinite program, to compute the robust sensor design strategies 'globally' even though the original optimization problem is nonconvex and highly nonlinear. We also extend this result to scenarios where the sensor makes noisy or partial measurements.
KW - Security
KW - Stackelberg games
KW - semidefinite programming (SDP)
KW - sensor placement
KW - stochastic control
UR - http://www.scopus.com/inward/record.url?scp=85115923498&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85115923498&partnerID=8YFLogxK
U2 - 10.1109/TAC.2020.3030861
DO - 10.1109/TAC.2020.3030861
M3 - Article
AN - SCOPUS:85115923498
SN - 0018-9286
VL - 66
SP - 4589
EP - 4603
JO - IEEE Transactions on Automatic Control
JF - IEEE Transactions on Automatic Control
IS - 10
ER -