Deception in Supervisory Control

Mustafa O. Karabag, Melkior Ornik, Ufuk Topcu

Research output: Contribution to journal › Article › peer-review


The use of deceptive strategies is important for an agent that attempts not to reveal its intentions in an adversarial environment. We consider a setting in which a supervisor provides a reference policy and expects an agent to follow that policy and perform a task. The agent may instead follow a different, deceptive policy to achieve a different task. We model the environment and the behavior of the agent with a Markov decision process, represent the tasks of the agent and the supervisor with reachability specifications, and study the synthesis of optimal deceptive policies for such agents. We also study the synthesis of optimal reference policies that prevent deceptive strategies of the agent and achieve the supervisor's task with high probability. We show that the synthesis of optimal deceptive policies admits a convex optimization formulation, while the synthesis of optimal reference policies requires solving a nonconvex optimization problem. We also show that the synthesis of optimal reference policies is NP-hard.
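The paper's convex program for deceptive policies is a Kullback–Leibler divergence minimization over occupancy measures; as a simpler, related sketch (not the paper's exact formulation), a reachability specification over an MDP can be optimized with the standard occupancy-measure linear program. The small MDP below is a hypothetical example invented for illustration, not taken from the paper.

```python
# Sketch: occupancy-measure LP maximizing the probability of reaching
# a goal state in a small, hypothetical MDP (not the paper's model).
# States: 0 (initial), 1, goal (absorbing), trap (absorbing).
import numpy as np
from scipy.optimize import linprog

# Variable order: x[0,a0], x[0,a1], x[1,a0], x[1,a1]
# Hypothetical transitions:
#   (0, a0) -> state 1 w.p. 1
#   (0, a1) -> goal w.p. 0.5, trap w.p. 0.5
#   (1, a0) -> goal w.p. 0.9, trap w.p. 0.1
#   (1, a1) -> state 0 w.p. 1

# Flow conservation at each transient state: outflow - inflow = initial mass.
A_eq = np.array([
    [1.0, 1.0, 0.0, -1.0],   # state 0 (all initial mass here)
    [-1.0, 0.0, 1.0, 1.0],   # state 1
])
b_eq = np.array([1.0, 0.0])

# Objective: expected flow into the goal state (linprog minimizes, so negate).
c = np.array([0.0, -0.5, -0.9, 0.0])

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 4)
reach_prob = -res.fun

# A stationary policy is recovered by normalizing the occupancy measure:
x = res.x
policy_state0 = x[0:2] / x[0:2].sum()  # action distribution at state 0
print(f"max reachability probability: {reach_prob:.2f}")
```

In this toy instance the optimum takes action a0 at both states and reaches the goal with probability 0.9; the paper's deceptive-policy synthesis replaces the linear objective with a KL-divergence term between the agent's and the reference policy's path distributions, which keeps the problem convex.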

Original language: English (US)
Journal: IEEE Transactions on Automatic Control
State: Accepted/In press - 2021


Keywords

  • Computational complexity
  • Convex functions
  • Deception
  • Hidden Markov models
  • Markov decision processes
  • Markov processes
  • Optimization
  • Probabilistic logic
  • Supervisory control
  • Task analysis

ASJC Scopus subject areas

  • Control and Systems Engineering
  • Computer Science Applications
  • Electrical and Electronic Engineering

