Sampling bounds for stochastic optimization

Moses Charikar, Chandra Chekuri, Martin Pál

Research output: Contribution to journal › Conference article › peer-review

Abstract

A large class of stochastic optimization problems can be modeled as minimizing an objective function f that depends on a choice of a vector x ∈ X, as well as on a random external parameter ω ∈ Ω given by a probability distribution π. The value of the objective function is a random variable and often the goal is to find an x ∈ X to minimize the expected cost E_ω[f_ω(x)]. Each ω is referred to as a scenario. We consider the case when Ω is large or infinite and we are allowed to sample from π in a black-box fashion. A common method, known as the SAA method (sample average approximation), is to pick sufficiently many independent samples from π and use them to approximate π and correspondingly E_ω[f_ω(x)]. This is one of several scenario reduction methods used in practice. There has been substantial recent interest in two-stage stochastic versions of combinatorial optimization problems which can be modeled by the framework described above. In particular, we are interested in the model where a parameter λ bounds the relative factor by which costs increase if decisions are delayed to the second stage. Although the SAA method has been widely analyzed, the known bounds on the number of samples required for a (1 + ε) approximation depend on the variance of π even when λ is assumed to be a fixed constant. Shmoys and Swamy [13, 14] proved that a polynomial number of samples suffices when f can be modeled as a linear or convex program; they used modifications to the ellipsoid method to prove this. In this paper we give a different proof, based on earlier methods of Kleywegt, Shapiro, and Homem-De-Mello [6] and others, that a polynomial number of samples suffices for the SAA method. Our proof is not based on computational properties of f and hence also applies to integer programs. We further show that small variations of the SAA method suffice to obtain a bound on the sample size even when we have only an approximation algorithm to solve the sampled problem. We are thus able to extend a number of algorithms designed for the case when π is given explicitly to the case when π is given as a black-box sampling oracle.
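To make the SAA method concrete, here is a minimal sketch in Python on a hypothetical toy instance (a newsvendor-style two-stage problem); the instance, the constant LAMBDA, and helper names such as scenario_cost and sample_scenario are illustrative assumptions, not constructs from the paper. Given N i.i.d. samples ω_1, ..., ω_N from π, SAA simply minimizes the empirical average (1/N) Σ_i f_{ω_i}(x) over x ∈ X.

    import random

    # Hypothetical toy instance (illustrative, not from the paper): choose a
    # first-stage quantity x; any shortfall against random demand omega must
    # be covered in the second stage at a cost inflated by the factor LAMBDA.
    LAMBDA = 2.0            # relative second-stage cost inflation (the paper's lambda)
    FIRST_STAGE_COST = 1.0  # unit cost of committing in the first stage

    def scenario_cost(x, omega):
        """f_omega(x): pay for x units now; cover the shortfall (omega - x)+
        at the inflated second-stage price LAMBDA * FIRST_STAGE_COST."""
        shortfall = max(omega - x, 0.0)
        return FIRST_STAGE_COST * x + LAMBDA * FIRST_STAGE_COST * shortfall

    def sample_scenario():
        """Black-box sampling oracle for pi: here, demand ~ Uniform[0, 100]."""
        return random.uniform(0.0, 100.0)

    def saa(num_samples, candidates):
        """Sample average approximation: draw N i.i.d. scenarios and return the
        candidate x minimizing the empirical mean (1/N) * sum_i f_{omega_i}(x)."""
        scenarios = [sample_scenario() for _ in range(num_samples)]
        def empirical_cost(x):
            return sum(scenario_cost(x, w) for w in scenarios) / num_samples
        return min(candidates, key=empirical_cost)

    if __name__ == "__main__":
        random.seed(0)
        x_hat = saa(num_samples=2000, candidates=range(0, 101))
        print("SAA solution:", x_hat)

For this toy instance the true optimum is x = 50 (the newsvendor critical ratio with LAMBDA = 2 and Uniform[0, 100] demand), so for moderate sample sizes the printed SAA solution should land close to 50. The paper's question is how large N must be for such a (1 + ε) guarantee to hold with only a polynomial number of samples when λ is a fixed constant.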

Original language: English (US)
Pages (from-to): 257-269
Number of pages: 13
Journal: Lecture Notes in Computer Science
Volume: 3624
State: Published - 2005
Externally published: Yes
Event: 8th International Workshop on Approximation Algorithms for Combinatorial Optimization Problems, APPROX 2005 and 9th International Workshop on Randomization and Computation, RANDOM 2005 - Berkeley, CA, United States
Duration: Aug 22, 2005 – Aug 24, 2005

ASJC Scopus subject areas

  • Theoretical Computer Science
  • General Computer Science
