Asymptotic optimality of a greedy randomized algorithm in a large-scale service system with general packing constraints

Research output: Contribution to journal › Article

Abstract

We consider a service system model primarily motivated by the problem of efficient assignment of virtual machines to physical host machines in a network cloud, so that the number of occupied hosts is minimized. There are multiple types of arriving customers, where a customer’s mean service time depends on its type. There is an infinite number of servers. Multiple customers can be placed for service into one server, subject to general “packing” constraints. Service times of different customers are independent, even if served simultaneously by the same server. Each new arriving customer is placed for service immediately, either into a server already serving other customers (as long as packing constraints are not violated) or into an idle server. After a service completion, each customer leaves its server and the system. We propose an extremely simple and easily implementable customer placement algorithm, called Greedy-Random (GRAND). It places each arriving customer uniformly at random into either one of the already occupied servers (subject to packing constraints) or one of the so-called zero-servers, which are empty servers designated to be available to new arrivals. One instance of GRAND, called GRAND(aZ), where a ≥ 0 is a parameter, is such that the number of zero-servers at any given time t is aZ(t), where Z(t) is the current total number of customers in the system. We prove that GRAND(aZ) with a > 0 is asymptotically optimal, as the customer arrival rates grow to infinity and a → 0, in the sense of minimizing the total number of occupied servers in steady state. In addition, we study various versions of GRAND via simulation and observe the dependence of convergence speed and steady-state performance on the number of zero-servers.
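For concreteness, below is a minimal Python sketch of the GRAND(aZ) placement rule as described in the abstract. The Server class, the packing_ok predicate, and the rounding of aZ(t) to an integer number of zero-servers (with a floor of one, so every arrival can be placed) are illustrative assumptions, not details taken from the paper; service completions and departures are omitted.

import random

class Server:
    def __init__(self):
        self.customers = []  # customer types currently in service on this server

def grand_az_place(occupied, customer_type, a, packing_ok):
    """Place one arriving customer of the given type using the GRAND(aZ) rule."""
    # Z(t): total number of customers currently in the system.
    z = sum(len(s.customers) for s in occupied)

    # Occupied servers into which the arrival can still be packed.
    feasible = [s for s in occupied
                if packing_ok(s.customers + [customer_type])]

    # Zero-servers: empty servers designated as available to new arrivals.
    # Their number is taken as a * Z(t), rounded; keeping at least one is a
    # simplification so that the arrival always has somewhere to go.
    num_zero = max(round(a * z), 1)

    # Uniform random choice over feasible occupied servers and zero-servers.
    idx = random.randrange(len(feasible) + num_zero)
    if idx < len(feasible):
        target = feasible[idx]
    else:
        target = Server()      # the chosen zero-server becomes occupied
        occupied.append(target)

    target.customers.append(customer_type)
    return target

if __name__ == "__main__":
    # Toy packing constraint (hypothetical): a server holds at most 4 customers.
    packing_ok = lambda customers: len(customers) <= 4

    servers = []
    for _ in range(100):
        grand_az_place(servers, customer_type="A", a=0.1, packing_ok=packing_ok)
    print(len(servers), "occupied servers after 100 arrivals")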

Original language: English (US)
Pages (from-to): 117-143
Number of pages: 27
Journal: Queueing Systems
Volume: 79
Issue number: 2
DOIs: 10.1007/s11134-014-9414-x
State: Published - Feb 1 2015

Keywords

  • Cloud computing
  • Fluid limit
  • Greedy random algorithm
  • Infinite server system
  • Queueing networks
  • Stochastic bin packing
  • Virtual machine

ASJC Scopus subject areas

  • Statistics and Probability
  • Computer Science Applications
  • Management Science and Operations Research
  • Computational Theory and Mathematics

Cite this

Asymptotic optimality of a greedy randomized algorithm in a large-scale service system with general packing constraints. / Stolyar, Alexander L.; Zhong, Yuan.

In: Queueing Systems, Vol. 79, No. 2, 01.02.2015, p. 117-143.

Research output: Contribution to journal › Article

@article{83f0995a3bfa4236bf1d2d6abcdf9a40,
title = "Asymptotic optimality of a greedy randomized algorithm in a large-scale service system with general packing constraints",
abstract = "We consider a service system model primarily motivated by the problem of efficient assignment of virtual machines to physical host machines in a network cloud, so that the number of occupied hosts is minimized. There are multiple types of arriving customers, where a customer’s mean service time depends on its type. There is an infinite number of servers. Multiple customers can be placed for service into one server, subject to general “packing” constraints. Service times of different customers are independent, even if served simultaneously by the same server. Each new arriving customer is placed for service immediately, either into a server already serving other customers (as long as packing constraints are not violated) or into an idle server. After a service completion, each customer leaves its server and the system. We propose an extremely simple and easily implementable customer placement algorithm, called Greedy-Random (GRAND). It places each arriving customer uniformly at random into either one of the already occupied servers (subject to packing constraints) or one of the so-called zero-servers, which are empty servers designated to be available to new arrivals. One instance of GRAND, called GRAND(aZ), where a ≥ 0 is a parameter, is such that the number of zero-servers at any given time t is aZ(t), where Z(t) is the current total number of customers in the system. We prove that GRAND(aZ) with a > 0 is asymptotically optimal, as the customer arrival rates grow to infinity and a → 0, in the sense of minimizing the total number of occupied servers in steady state. In addition, we study various versions of GRAND via simulation and observe the dependence of convergence speed and steady-state performance on the number of zero-servers.",
keywords = "Cloud computing, Fluid limit, Greedy random algorithm, Infinite server system, Queueing networks, Stochastic bin packing, Virtual machine",
author = "Stolyar, {Alexander L.} and Yuan Zhong",
year = "2015",
month = "2",
day = "1",
doi = "10.1007/s11134-014-9414-x",
language = "English (US)",
volume = "79",
pages = "117--143",
journal = "Queueing Systems",
issn = "0257-0130",
publisher = "Springer Netherlands",
number = "2",

}

TY - JOUR

T1 - Asymptotic optimality of a greedy randomized algorithm in a large-scale service system with general packing constraints

AU - Stolyar, Alexander L.

AU - Zhong, Yuan

PY - 2015/2/1

Y1 - 2015/2/1

N2 - We consider a service system model primarily motivated by the problem of efficient assignment of virtual machines to physical host machines in a network cloud, so that the number of occupied hosts is minimized. There are multiple types of arriving customers, where a customer’s mean service time depends on its type. There is an infinite number of servers. Multiple customers can be placed for service into one server, subject to general “packing” constraints. Service times of different customers are independent, even if served simultaneously by the same server. Each new arriving customer is placed for service immediately, either into a server already serving other customers (as long as packing constraints are not violated) or into an idle server. After a service completion, each customer leaves its server and the system. We propose an extremely simple and easily implementable customer placement algorithm, called Greedy-Random (GRAND). It places each arriving customer uniformly at random into either one of the already occupied servers (subject to packing constraints) or one of the so-called zero-servers, which are empty servers designated to be available to new arrivals. One instance of GRAND, called GRAND(aZ), where a ≥ 0 is a parameter, is such that the number of zero-servers at any given time t is aZ(t), where Z(t) is the current total number of customers in the system. We prove that GRAND(aZ) with a > 0 is asymptotically optimal, as the customer arrival rates grow to infinity and a → 0, in the sense of minimizing the total number of occupied servers in steady state. In addition, we study various versions of GRAND via simulation and observe the dependence of convergence speed and steady-state performance on the number of zero-servers.

AB - We consider a service system model primarily motivated by the problem of efficient assignment of virtual machines to physical host machines in a network cloud, so that the number of occupied hosts is minimized. There are multiple types of arriving customers, where a customer’s mean service time depends on its type. There is an infinite number of servers. Multiple customers can be placed for service into one server, subject to general “packing” constraints. Service times of different customers are independent, even if served simultaneously by the same server. Each new arriving customer is placed for service immediately, either into a server already serving other customers (as long as packing constraints are not violated) or into an idle server. After a service completion, each customer leaves its server and the system. We propose an extremely simple and easily implementable customer placement algorithm, called Greedy-Random (GRAND). It places each arriving customer uniformly at random into either one of the already occupied servers (subject to packing constraints) or one of the so-called zero-servers, which are empty servers designated to be available to new arrivals. One instance of GRAND, called GRAND(aZ), where a ≥ 0 is a parameter, is such that the number of zero-servers at any given time t is aZ(t), where Z(t) is the current total number of customers in the system. We prove that GRAND(aZ) with a > 0 is asymptotically optimal, as the customer arrival rates grow to infinity and a → 0, in the sense of minimizing the total number of occupied servers in steady state. In addition, we study various versions of GRAND via simulation and observe the dependence of convergence speed and steady-state performance on the number of zero-servers.

KW - Cloud computing

KW - Fluid limit

KW - Greedy random algorithm

KW - Infinite server system

KW - Queueing networks

KW - Stochastic bin packing

KW - Virtual machine

UR - http://www.scopus.com/inward/record.url?scp=84904532402&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=84904532402&partnerID=8YFLogxK

U2 - 10.1007/s11134-014-9414-x

DO - 10.1007/s11134-014-9414-x

M3 - Article

AN - SCOPUS:84904532402

VL - 79

SP - 117

EP - 143

JO - Queueing Systems

JF - Queueing Systems

SN - 0257-0130

IS - 2

ER -