Abstract
We study a first-order primal-dual subgradient method for risk-constrained, risk-penalized optimization problems, where risk is modeled via the popular conditional value at risk (CVaR) measure. The algorithm processes independent and identically distributed samples from the underlying uncertainty in an online fashion and, with a constant step size, produces an η/√K-approximately feasible and η/√K-approximately optimal point within K iterations, where η grows with the tunable risk parameters of CVaR. We use our bounds to derive optimized step sizes and precisely characterize the computational cost of risk aversion, as revealed by the growth in η. Our proposed algorithm makes a simple modification to a typical primal-dual stochastic subgradient algorithm. With this mild change, our analysis surprisingly obviates the need to impose a priori bounds or complex adaptive bounding schemes on the dual variables in order to run the algorithm, as many prior works assume. We also draw interesting parallels between our sample complexity and that derived in the literature for chance-constrained programs, which rely on a very different solution architecture.
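To make the setup above concrete, the sketch below shows a generic primal-dual stochastic subgradient loop for a CVaR-constrained problem, using the Rockafellar–Uryasev representation CVaR_α[Z] = min_t { t + E[(Z − t)_+]/(1 − α) }. This is not the paper's exact algorithm (in particular, it omits the authors' modification that removes the need to bound the dual variable); the toy objective, constraint, step sizes, and all function names are assumptions chosen purely for illustration.

```python
# Minimal illustrative sketch (assumptions only, not the paper's method):
# minimize f(x)  subject to  CVaR_alpha[ g(x, xi) ] <= 0,
# via primal descent / dual ascent on the Lagrangian
#   L(x, t, lam) = f(x) + lam * ( t + (g(x, xi) - t)_+ / (1 - alpha) ),
# processing one fresh i.i.d. sample xi per iteration.

import numpy as np

rng = np.random.default_rng(0)

alpha = 0.9          # CVaR level (tunable risk parameter)
eta_x = 0.01         # constant primal step size
eta_lam = 0.01       # constant dual step size
K = 5000             # number of iterations / samples

def f_grad(x):
    # gradient of a toy objective f(x) = ||x||^2 / 2
    return x

def g(x, xi):
    # toy constraint function g(x, xi) = xi^T x - 1
    return xi @ x - 1.0

def g_grad_x(x, xi):
    return xi

x = np.zeros(3)      # primal decision variable
t = 0.0              # auxiliary CVaR variable (Rockafellar-Uryasev)
lam = 0.0            # dual multiplier for the CVaR constraint
x_avg = np.zeros_like(x)

for k in range(K):
    xi = rng.normal(size=3)              # one i.i.d. sample per iteration
    excess = g(x, xi) - t
    indicator = 1.0 if excess > 0 else 0.0

    # stochastic subgradients of L in x and t
    grad_x = f_grad(x) + lam * indicator * g_grad_x(x, xi) / (1.0 - alpha)
    grad_t = lam * (1.0 - indicator / (1.0 - alpha))
    # stochastic estimate of the CVaR constraint value
    cvar_estimate = t + max(excess, 0.0) / (1.0 - alpha)

    # primal descent, dual ascent projected onto lam >= 0
    x = x - eta_x * grad_x
    t = t - eta_x * grad_t
    lam = max(lam + eta_lam * cvar_estimate, 0.0)

    x_avg += (x - x_avg) / (k + 1)       # running average of primal iterates

print("averaged primal iterate:", x_avg)
```

In this kind of scheme, the averaged primal iterate is the quantity whose feasibility and optimality gaps are typically bounded in terms of the horizon K, which is the role the η/√K guarantees play in the abstract.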
| Original language | English (US) |
|---|---|
| Pages (from-to) | 428-460 |
| Number of pages | 33 |
| Journal | Journal of Optimization Theory and Applications |
| Volume | 190 |
| Issue number | 2 |
| Early online date | Jun 24 2021 |
| DOIs | |
| State | Published - Aug 2021 |
Keywords
- Conditional value at risk
- Primal-dual optimization
- Risk-sensitive optimization
- Stochastic optimization
ASJC Scopus subject areas
- Management Science and Operations Research
- Control and Optimization
- Applied Mathematics