Deceptive Trajectory Imitation Using Affine Feedback

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

In adversarial environments it is often beneficial for an agent to reach its objective while creating the impression that it is progressing towards a different goal. This paper considers the setting where an agent seeks to tightly follow a public reference trajectory during the observation period, while actually using an affine feedback controller to ultimately guide the system towards a hidden objective. We pose the optimal synthesis of this affine controller as a nonlinear constrained optimization problem. Taking the sum of norms of trajectory deviations over time as the cost function, we use the power mean inequality to approximate the optimal controller as the solution of an ordinary least squares problem. A method inspired by Tikhonov regularization ensures that the controlled trajectory converges to the intended objective. We illustrate our method on a variety of numerical examples, showing that it often generates trajectories nearly indistinguishable from the reference during the observation period, and we identify some fundamental limits of trajectory imitation using affine feedback.
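As a rough illustration of the least-squares idea in the abstract, the sketch below fits an affine feedback law u_k = K x_k + c by Tikhonov-regularized (ridge) least squares so that the closed loop x_{k+1} = A x_k + B u_k reproduces a reference trajectory one step at a time. This is a simplified stand-in, not the paper's actual formulation: the double-integrator system, the reference-generating gains `K_true` and `c_true`, and the regularization weight `lam` are all illustrative assumptions, and the mechanism that steers the trajectory to a hidden objective after the observation period is omitted.

```python
import numpy as np

# Double-integrator dynamics, sampled at step dt (illustrative system).
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.5 * dt**2], [dt]])
b = B.ravel()

# Generate a dynamically feasible "public" reference by rolling out a
# known affine law u = K_true x + c_true (stand-in for the trajectory an
# observer expects to see).
K_true, c_true = np.array([-0.4, -1.0]), 0.5
T = 50
x = np.zeros(2)
traj = [x]
for _ in range(T):
    u = K_true @ x + c_true
    x = A @ x + b * u
    traj.append(x)
x_ref = np.array(traj)                            # shape (T+1, 2)

# Fit u_k = K x_k + c so the one-step predictions match the reference:
#   x_ref[k+1] ≈ A x_ref[k] + B (K x_ref[k] + c).
# Projecting each residual onto B gives a scalar target per step, and the
# fit becomes ridge (Tikhonov) least squares in theta = [K, c].
Phi = np.hstack([x_ref[:-1], np.ones((T, 1))])    # regressors, shape (T, 3)
y = (x_ref[1:] - x_ref[:-1] @ A.T) @ b / (b @ b)  # implied inputs, shape (T,)
lam = 1e-6                                        # Tikhonov weight
theta = np.linalg.solve(Phi.T @ Phi + lam * np.eye(3), Phi.T @ y)
K, c = theta[:2], theta[2]

# Roll out the fitted controller: the closed-loop trajectory should be
# nearly indistinguishable from the reference.
x = np.zeros(2)
max_dev = 0.0
for k in range(T):
    x = A @ x + b * (K @ x + c)
    max_dev = max(max_dev, np.linalg.norm(x - x_ref[k + 1]))
```

Because this reference is dynamically feasible for the given system, the fit recovers the generating law almost exactly and the closed-loop deviation is negligible; for an infeasible reference the residual of the least-squares fit would quantify the unavoidable imitation error.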

Original language: English (US)
Title of host publication: 2022 American Control Conference, ACC 2022
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 5211-5216
Number of pages: 6
ISBN (Electronic): 9781665451963
DOIs
State: Published - 2022
Event: 2022 American Control Conference, ACC 2022 - Atlanta, United States
Duration: Jun 8 2022 - Jun 10 2022

Publication series

Name: Proceedings of the American Control Conference
Volume: 2022-June
ISSN (Print): 0743-1619

Conference

Conference: 2022 American Control Conference, ACC 2022
Country/Territory: United States
City: Atlanta
Period: 6/8/22 - 6/10/22

ASJC Scopus subject areas

  • Electrical and Electronic Engineering
