Least inferable policies for Markov decision processes

Mustafa O. Karabag, Melkior Ornik, Ufuk Topcu

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

In a variety of applications, an agent's success depends on the knowledge that an adversarial observer has or can gather about the agent's decisions. It is therefore desirable for the agent to achieve a task while reducing the ability of an observer to infer the agent's policy. We consider the task of the agent as a reachability problem in a Markov decision process and study the synthesis of policies that minimize the observer's ability to infer the transition probabilities of the agent between the states of the Markov decision process. We introduce a metric based on the Fisher information as a proxy for the information leaked to the observer and, using this metric, formulate a problem that minimizes the expected total information subject to the reachability constraint. We proceed to solve the problem using convex optimization methods. To verify the proposed method, we analyze the relationship between the expected total information and the estimation error of the observer, and show that, for a particular class of Markov decision processes, these two values are inversely proportional.
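The two quantities the abstract relates, the Fisher information leaked per observed transition and the observer's estimation error, can be illustrated on a toy case. This is an illustrative simplification, not the paper's formulation: a single state with two successor states reached with probabilities p and 1 - p, so the observer estimates one Bernoulli parameter. Here the Fisher information of one observation is I(p) = 1/(p(1 - p)), and the Cramér-Rao bound 1/(n·I(p)) lower-bounds the variance of any unbiased estimate of p from n observed transitions; randomizing toward p = 0.5 minimizes the leaked information and maximizes the observer's error, consistent with the inverse relationship described above.

```python
import random

def fisher_info_bernoulli(p):
    """Fisher information of one observed transition when the agent moves
    to one of two successor states with probabilities p and 1 - p.
    I(p) = 1 / (p * (1 - p)); minimized at p = 0.5."""
    return 1.0 / (p * (1.0 - p))

def mle_variance(p, n_obs, n_trials=20000, seed=0):
    """Empirical variance of the observer's maximum-likelihood estimate
    (the sample frequency) of p, from n_obs observed transitions.
    The Cramer-Rao bound predicts roughly p * (1 - p) / n_obs."""
    rng = random.Random(seed)
    estimates = []
    for _ in range(n_trials):
        k = sum(rng.random() < p for _ in range(n_obs))
        estimates.append(k / n_obs)
    mean = sum(estimates) / n_trials
    return sum((e - mean) ** 2 for e in estimates) / n_trials
```

For example, `fisher_info_bernoulli(0.5)` is 4, the smallest value over (0, 1), so the least inferable single-state policy in this toy model is the uniform one, and `mle_variance(0.5, 50)` lands near the Cramér-Rao floor of 0.25 / 50 = 0.005.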

Original language: English (US)
Title of host publication: 2019 American Control Conference, ACC 2019
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 1224-1231
Number of pages: 8
ISBN (Electronic): 9781538679265
State: Published - Jul 2019
Event: 2019 American Control Conference, ACC 2019 - Philadelphia, United States
Duration: Jul 10 2019 - Jul 12 2019

Publication series

Name: Proceedings of the American Control Conference
Volume: 2019-July
ISSN (Print): 0743-1619

Conference

Conference: 2019 American Control Conference, ACC 2019
Country: United States
City: Philadelphia
Period: 7/10/19 - 7/12/19

Fingerprint

  • Convex optimization
  • Error analysis

ASJC Scopus subject areas

  • Electrical and Electronic Engineering

Cite this

Karabag, M. O., Ornik, M., & Topcu, U. (2019). Least inferable policies for Markov decision processes. In 2019 American Control Conference, ACC 2019 (pp. 1224-1231). [8815129] (Proceedings of the American Control Conference; Vol. 2019-July). Institute of Electrical and Electronics Engineers Inc.

@inproceedings{4e656923a3a245e08edc1ee88aebc755,
title = "Least inferable policies for {M}arkov decision processes",
author = "Karabag, {Mustafa O.} and Melkior Ornik and Ufuk Topcu",
year = "2019",
month = jul,
language = "English (US)",
series = "Proceedings of the American Control Conference",
publisher = "Institute of Electrical and Electronics Engineers Inc.",
pages = "1224--1231",
booktitle = "2019 American Control Conference, ACC 2019",
address = "United States",
}

Other links

Scopus record: http://www.scopus.com/inward/record.url?scp=85072277854&partnerID=8YFLogxK
Scopus cited-by: http://www.scopus.com/inward/citedby.url?scp=85072277854&partnerID=8YFLogxK
Scopus AN: SCOPUS:85072277854