### Abstract

In this paper, we consider a class of stochastic nonlinear systems in strict-feedback form in which, in addition to the standard Wiener process, a norm-bounded unknown disturbance drives the system. The bound on the disturbance takes the form of an upper bound on its power in terms of the power of the output. Within this structure, we seek a minimax state-feedback controller, namely one that minimizes, over all state-feedback controllers, the maximum of a given class of integral costs, where the choice of the specific cost function is itself part of the design problem, as in inverse optimality. We derive the minimax controller by first converting the original constrained optimization problem into an unconstrained one (a stochastic differential game) and then exploiting the duality relationship between stochastic games and risk-sensitive stochastic control. The resulting state-feedback control law is absolutely stabilizing. Moreover, it is both locally optimal and globally inverse optimal: the first property means that a linearized version of the controller solves a linear-quadratic risk-sensitive control problem, and the second that there exists an appropriate cost function with respect to which the controller is optimal.
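The abstract gives no equations, so the following is purely an illustrative sketch of the kind of setup it describes; all symbols below (the functions $f_i$, $g_n$, $h_i$ and the gain $\gamma$) are assumptions for exposition, not taken from the paper:

```latex
% Illustrative sketch only -- notation is assumed, not the paper's.
% Strict-feedback stochastic system: each state x_i is driven by the
% next state x_{i+1}, a Wiener process w, and an unknown disturbance v.
\begin{align*}
  dx_i &= \bigl(x_{i+1} + f_i(x_1,\dots,x_i)\bigr)\,dt
          + h_i(x_1,\dots,x_i)\,dw_t, \qquad i = 1,\dots,n-1,\\
  dx_n &= \bigl(u + f_n(x) + g_n(x)\,v_t\bigr)\,dt + h_n(x)\,dw_t,\\
  y    &= x_1 .
\end{align*}
% A power-type bound on the disturbance in terms of the output power:
\[
  \limsup_{T\to\infty} \frac{1}{T}\,\mathbb{E}\!\int_0^T |v_t|^2\,dt
  \;\le\;
  \gamma^2 \limsup_{T\to\infty} \frac{1}{T}\,\mathbb{E}\!\int_0^T |y_t|^2\,dt .
\]
```

Under a constraint of this type, the minimax design seeks a state feedback $u = \mu(x)$ attaining $\inf_{\mu}\sup_{v} J(\mu, v)$ over disturbances satisfying the bound, with the integral cost $J$ itself chosen as part of the design (inverse optimality), which is what motivates the conversion to an unconstrained stochastic differential game.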

| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 1065-1070 |
| Number of pages | 6 |
| Journal | Proceedings of the IEEE Conference on Decision and Control |
| Volume | 1 |
| State | Published - Dec 1 2003 |
| Event | 42nd IEEE Conference on Decision and Control, Maui, HI, United States; Dec 9 2003 → Dec 12 2003 |

### ASJC Scopus subject areas

- Control and Systems Engineering
- Modeling and Simulation
- Control and Optimization

### Cite this

**Minimax Nonlinear Control under Stochastic Uncertainty Constraints.** / Tang, Cheng; Başar, Tamer.

Research output: Contribution to journal › Conference article

*Proceedings of the IEEE Conference on Decision and Control*, vol. 1, pp. 1065-1070.


TY - JOUR

T1 - Minimax Nonlinear Control under Stochastic Uncertainty Constraints

AU - Tang, Cheng

AU - Başar, Tamer

PY - 2003/12/1

Y1 - 2003/12/1

UR - http://www.scopus.com/inward/record.url?scp=1542329083&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=1542329083&partnerID=8YFLogxK

M3 - Conference article

AN - SCOPUS:1542329083

VL - 1

SP - 1065

EP - 1070

JO - Proceedings of the IEEE Conference on Decision and Control

JF - Proceedings of the IEEE Conference on Decision and Control

SN - 0191-2216

ER -