A Dynamic Games Approach to Controller Design: Disturbance Rejection in Discrete-Time

Research output: Contribution to journal › Article

Abstract

We show that the discrete-time disturbance rejection problem, formulated in finite and infinite horizons, and under perfect state measurements, can be solved by making direct use of some results on linear-quadratic zero-sum dynamic games. For the finite-horizon problem an optimal (minimax) controller exists (in contrast with the continuous-time H∞ control problem), and can be expressed in terms of a generalized (time-varying) discrete-time Riccati equation. The existence of an optimum also holds in the infinite-horizon case, under an appropriate observability condition, with the optimal control, given in terms of a generalized algebraic Riccati equation, also being stabilizing. In both cases, the corresponding worst-case disturbances turn out to be correlated random sequences with discrete distributions, which means that the problem (viewed as a dynamic game between the controller and the disturbance) does not admit a pure-strategy saddle point. The paper also presents results for the delayed state measurement and the nonzero initial state cases. Furthermore, it formulates a stochastic version of the problem, where the disturbance is a partially stochastic process with fixed higher order moments (other than the mean). In this case, the minimax controller depends on the energy bound of the disturbance, provided that it is below a certain threshold. Several numerical studies included in the paper illustrate the main results.
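The generalized discrete-time Riccati recursion referred to in the abstract can be sketched numerically. The snippet below implements a standard game-theoretic (H∞-type) form of that recursion from the zero-sum LQ literature; it is not code from the paper, and the system matrices, the attenuation level gamma, and the particular gain formula are illustrative assumptions.

```python
import numpy as np

def game_riccati_step(M_next, A, B, D, Q, gamma):
    """One backward step of a generalized (game-theoretic) discrete-time
    Riccati recursion of the zero-sum LQ / H-infinity type:

        M_k = Q + A' M_{k+1} (I + (B B' - gamma^-2 D D') M_{k+1})^{-1} A

    A concavity condition on the disturbance's side of the game must hold
    for the step to be well defined; if it fails, gamma is below the
    achievable attenuation threshold and no minimax controller exists.
    """
    n = A.shape[0]
    concavity = np.eye(D.shape[1]) - gamma ** -2 * (D.T @ M_next @ D)
    if np.any(np.linalg.eigvalsh(concavity) <= 0):
        raise ValueError("gamma below threshold: no minimax controller")
    S = B @ B.T - gamma ** -2 * (D @ D.T)
    return Q + A.T @ M_next @ np.linalg.inv(np.eye(n) + S @ M_next) @ A

# Illustrative scalar system (all numbers are made up for this sketch):
# x_{k+1} = 0.5 x_k + u_k + w_k, stage cost x_k^2 + u_k^2, gamma = 10.
A = np.array([[0.5]]); B = np.array([[1.0]])
D = np.array([[1.0]]); Q = np.array([[1.0]])
gamma = 10.0

# Iterating the finite-horizon recursion backward approximates the
# solution of the generalized algebraic Riccati equation that governs
# the infinite-horizon case.
M = Q.copy()
for _ in range(200):
    M = game_riccati_step(M, A, B, D, Q, gamma)

# One common form of the minimax feedback gain, u_k = -K x_k:
S = B @ B.T - gamma ** -2 * (D @ D.T)
K = B.T @ M @ np.linalg.inv(np.eye(1) + S @ M) @ A
```

Consistent with the abstract's threshold remark, the step raises an error when gamma drops below the achievable attenuation level, since the concavity condition of the disturbance's maximization problem then fails.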

Original language: English (US)
Pages (from-to): 936-952
Number of pages: 17
Journal: IEEE Transactions on Automatic Control
Volume: 36
Issue number: 8
DOIs: 10.1109/9.133187
State: Published - Aug 1991

Fingerprint

  • Disturbance rejection
  • Riccati equations
  • Controllers
  • Observability
  • Random processes

ASJC Scopus subject areas

  • Control and Systems Engineering
  • Computer Science Applications
  • Electrical and Electronic Engineering

Cite this

A Dynamic Games Approach to Controller Design: Disturbance Rejection in Discrete-Time. / Basar, M Tamer.

In: IEEE Transactions on Automatic Control, Vol. 36, No. 8, 08.1991, p. 936-952.

Research output: Contribution to journal › Article

@article{f5d95793800f4f86983137c99283f223,
title = "A Dynamic Games Approach to Controller Design: Disturbance Rejection in Discrete-Time",
abstract = "We show that the discrete-time disturbance rejection problem, formulated in finite and infinite horizons, and under perfect state measurements, can be solved by making direct use of some results on linear-quadratic zero-sum dynamic games. For the finite-horizon problem an optimal (minimax) controller exists (in contrast with the continuous-time H∞ control problem), and can be expressed in terms of a generalized (time-varying) discrete-time Riccati equation. The existence of an optimum also holds in the infinite-horizon case, under an appropriate observability condition, with the optimal control, given in terms of a generalized algebraic Riccati equation, also being stabilizing. In both cases, the corresponding worst-case disturbances turn out to be correlated random sequences with discrete distributions, which means that the problem (viewed as a dynamic game between the controller and the disturbance) does not admit a pure-strategy saddle point. The paper also presents results for the delayed state measurement and the nonzero initial state cases. Furthermore, it formulates a stochastic version of the problem, where the disturbance is a partially stochastic process with fixed higher order moments (other than the mean). In this case, the minimax controller depends on the energy bound of the disturbance, provided that it is below a certain threshold. Several numerical studies included in the paper illustrate the main results.",
author = "Basar, {M Tamer}",
year = "1991",
month = "8",
doi = "10.1109/9.133187",
language = "English (US)",
volume = "36",
pages = "936--952",
journal = "IEEE Transactions on Automatic Control",
issn = "0018-9286",
publisher = "Institute of Electrical and Electronics Engineers Inc.",
number = "8",

}

TY - JOUR

T1 - A Dynamic Games Approach to Controller Design

T2 - Disturbance Rejection in Discrete-Time

AU - Basar, M Tamer

PY - 1991/8

Y1 - 1991/8

N2 - We show that the discrete-time disturbance rejection problem, formulated in finite and infinite horizons, and under perfect state measurements, can be solved by making direct use of some results on linear-quadratic zero-sum dynamic games. For the finite-horizon problem an optimal (minimax) controller exists (in contrast with the continuous-time H∞ control problem), and can be expressed in terms of a generalized (time-varying) discrete-time Riccati equation. The existence of an optimum also holds in the infinite-horizon case, under an appropriate observability condition, with the optimal control, given in terms of a generalized algebraic Riccati equation, also being stabilizing. In both cases, the corresponding worst-case disturbances turn out to be correlated random sequences with discrete distributions, which means that the problem (viewed as a dynamic game between the controller and the disturbance) does not admit a pure-strategy saddle point. The paper also presents results for the delayed state measurement and the nonzero initial state cases. Furthermore, it formulates a stochastic version of the problem, where the disturbance is a partially stochastic process with fixed higher order moments (other than the mean). In this case, the minimax controller depends on the energy bound of the disturbance, provided that it is below a certain threshold. Several numerical studies included in the paper illustrate the main results.

AB - We show that the discrete-time disturbance rejection problem, formulated in finite and infinite horizons, and under perfect state measurements, can be solved by making direct use of some results on linear-quadratic zero-sum dynamic games. For the finite-horizon problem an optimal (minimax) controller exists (in contrast with the continuous-time H∞ control problem), and can be expressed in terms of a generalized (time-varying) discrete-time Riccati equation. The existence of an optimum also holds in the infinite-horizon case, under an appropriate observability condition, with the optimal control, given in terms of a generalized algebraic Riccati equation, also being stabilizing. In both cases, the corresponding worst-case disturbances turn out to be correlated random sequences with discrete distributions, which means that the problem (viewed as a dynamic game between the controller and the disturbance) does not admit a pure-strategy saddle point. The paper also presents results for the delayed state measurement and the nonzero initial state cases. Furthermore, it formulates a stochastic version of the problem, where the disturbance is a partially stochastic process with fixed higher order moments (other than the mean). In this case, the minimax controller depends on the energy bound of the disturbance, provided that it is below a certain threshold. Several numerical studies included in the paper illustrate the main results.

UR - http://www.scopus.com/inward/record.url?scp=0026205128&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=0026205128&partnerID=8YFLogxK

U2 - 10.1109/9.133187

DO - 10.1109/9.133187

M3 - Article

AN - SCOPUS:0026205128

VL - 36

SP - 936

EP - 952

JO - IEEE Transactions on Automatic Control

JF - IEEE Transactions on Automatic Control

SN - 0018-9286

IS - 8

ER -