### Abstract

In a variety of applications, an agent's success depends on the knowledge that an adversarial observer has or can gather about the agent's decisions. It is therefore desirable for the agent to achieve a task while reducing the ability of an observer to infer the agent's policy. We consider the task of the agent as a reachability problem in a Markov decision process and study the synthesis of policies that minimize the observer's ability to infer the transition probabilities of the agent between the states of the Markov decision process. We introduce a metric based on the Fisher information as a proxy for the information leaked to the observer and, using this metric, formulate a problem that minimizes the expected total information subject to the reachability constraint. We proceed to solve the problem using convex optimization methods. To verify the proposed method, we analyze the relationship between the expected total information and the estimation error of the observer, and show that, for a particular class of Markov decision processes, these two values are inversely proportional.
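The inverse relationship between leaked Fisher information and the observer's estimation error can be illustrated in the simplest setting. The following is a minimal sketch, not the paper's method: it assumes a single state with two successor states, so the transition probability is a Bernoulli parameter `p`, and the function names and numbers are illustrative only. NumPy is assumed to be available.

```python
import numpy as np

# For a Bernoulli transition parameter p, each observed transition gives
# the observer Fisher information I(p) = 1/(p(1-p)). By the Cramer-Rao
# bound, any unbiased estimator of p from n observations has variance
# at least 1/(n * I(p)) = p(1-p)/n.

def fisher_information(p):
    """Per-observation Fisher information about a Bernoulli parameter p."""
    return 1.0 / (p * (1.0 - p))

def crlb(p, n):
    """Cramer-Rao lower bound on the variance of an unbiased estimate of p."""
    return 1.0 / (n * fisher_information(p))

# A policy with p near 0.5 minimizes the information leaked per observation
# and therefore maximizes the observer's estimation-error floor.
n = 1000
rng = np.random.default_rng(0)
for p in (0.1, 0.5):
    # Empirical variance of the maximum-likelihood estimate (sample mean);
    # for a Bernoulli parameter it attains the Cramer-Rao bound.
    estimates = rng.binomial(n, p, size=20000) / n
    print(f"p={p}: CRLB={crlb(p, n):.2e}, empirical var={estimates.var():.2e}")
```

The estimation-error floor `p(1-p)/n` is exactly the reciprocal of the accumulated Fisher information `n * I(p)`, mirroring the abstract's claim that expected total information and estimation error are inversely proportional for a particular class of Markov decision processes.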

Original language | English (US)
---|---
Title of host publication | 2019 American Control Conference, ACC 2019
Publisher | Institute of Electrical and Electronics Engineers Inc.
Pages | 1224-1231
Number of pages | 8
ISBN (Electronic) | 9781538679265
State | Published - Jul 2019
Event | 2019 American Control Conference, ACC 2019 - Philadelphia, United States. Duration: Jul 10 2019 → Jul 12 2019

### Publication series

Name | Proceedings of the American Control Conference
---|---
Volume | 2019-July
ISSN (Print) | 0743-1619

### Conference

Conference | 2019 American Control Conference, ACC 2019
---|---
Country | United States
City | Philadelphia
Period | 7/10/19 → 7/12/19

### ASJC Scopus subject areas

- Electrical and Electronic Engineering

### Cite this

Karabag, M. O., Ornik, M., & Topcu, U. (2019). **Least inferable policies for Markov decision processes.** In *2019 American Control Conference, ACC 2019* (pp. 1224-1231). [8815129] (Proceedings of the American Control Conference; Vol. 2019-July). Institute of Electrical and Electronics Engineers Inc.

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

TY - GEN

T1 - Least inferable policies for Markov decision processes

AU - Karabag, Mustafa O.

AU - Ornik, Melkior

AU - Topcu, Ufuk

PY - 2019/7

Y1 - 2019/7

N2 - In a variety of applications, an agent's success depends on the knowledge that an adversarial observer has or can gather about the agent's decisions. It is therefore desirable for the agent to achieve a task while reducing the ability of an observer to infer the agent's policy. We consider the task of the agent as a reachability problem in a Markov decision process and study the synthesis of policies that minimize the observer's ability to infer the transition probabilities of the agent between the states of the Markov decision process. We introduce a metric based on the Fisher information as a proxy for the information leaked to the observer and, using this metric, formulate a problem that minimizes the expected total information subject to the reachability constraint. We proceed to solve the problem using convex optimization methods. To verify the proposed method, we analyze the relationship between the expected total information and the estimation error of the observer, and show that, for a particular class of Markov decision processes, these two values are inversely proportional.

AB - In a variety of applications, an agent's success depends on the knowledge that an adversarial observer has or can gather about the agent's decisions. It is therefore desirable for the agent to achieve a task while reducing the ability of an observer to infer the agent's policy. We consider the task of the agent as a reachability problem in a Markov decision process and study the synthesis of policies that minimize the observer's ability to infer the transition probabilities of the agent between the states of the Markov decision process. We introduce a metric based on the Fisher information as a proxy for the information leaked to the observer and, using this metric, formulate a problem that minimizes the expected total information subject to the reachability constraint. We proceed to solve the problem using convex optimization methods. To verify the proposed method, we analyze the relationship between the expected total information and the estimation error of the observer, and show that, for a particular class of Markov decision processes, these two values are inversely proportional.

UR - http://www.scopus.com/inward/record.url?scp=85072277854&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85072277854&partnerID=8YFLogxK

M3 - Conference contribution

AN - SCOPUS:85072277854

T3 - Proceedings of the American Control Conference

SP - 1224

EP - 1231

BT - 2019 American Control Conference, ACC 2019

PB - Institute of Electrical and Electronics Engineers Inc.

ER -