TY - GEN
T1 - Deceptive Trajectory Imitation Using Affine Feedback
AU - Ornik, Melkior
N1 - Publisher Copyright:
© 2022 American Automatic Control Council.
PY - 2022
Y1 - 2022
AB - In adversarial environments, it is often beneficial for an agent to reach its objective while creating the impression that it is progressing towards a different goal. This paper considers the setting where an agent seeks to tightly follow a public reference trajectory during the observation period, while actually using an affine feedback controller to ultimately guide the system towards a hidden objective. We pose the optimal synthesis of this affine controller as a nonlinear constrained optimization problem. Taking the sum of norms of trajectory deviations over time as the cost function, we use the power mean inequality to approximate the optimal controller as the solution of an ordinary least squares problem. We then apply a method inspired by Tikhonov regularization to ensure that the controlled trajectory converges to the intended objective. We illustrate our approach on a variety of numerical examples, showing that it often generates trajectories nearly indistinguishable from the reference during the observation period, and identify some fundamental limits of trajectory imitation using affine feedback.
UR - http://www.scopus.com/inward/record.url?scp=85138493141&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85138493141&partnerID=8YFLogxK
U2 - 10.23919/ACC53348.2022.9867275
DO - 10.23919/ACC53348.2022.9867275
M3 - Conference contribution
AN - SCOPUS:85138493141
T3 - Proceedings of the American Control Conference
SP - 5211
EP - 5216
BT - 2022 American Control Conference, ACC 2022
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2022 American Control Conference, ACC 2022
Y2 - 8 June 2022 through 10 June 2022
ER -