Two sides of a coin: Optimizing the schedule of map reduce jobs to minimize their makespan and improve cluster performance

Abhishek Verma, Ludmila Cherkasova, R. H. Campbell

Research output: Contribution to journal › Article

Abstract

Large-scale Hadoop clusters, with their data-intensive MapReduce-style applications that routinely process petabytes of unstructured and semi-structured data, represent a new entity in the changing landscape of modern data centers. A key challenge is to increase the utilization of these MapReduce clusters. For a set of production jobs that are executed periodically on new data, we can perform an offline analysis to evaluate the performance benefits of different optimization techniques. In this work, we consider a subset of the production workload that consists of MapReduce jobs with no dependencies. We observe that the order in which these jobs are executed can have a significant impact on their overall completion time and on cluster resource utilization. Our goal is to automate the design of a job schedule that minimizes the completion time (makespan) of such a set of MapReduce jobs. We introduce a simple abstraction in which each MapReduce job is represented as a pair of map and reduce stage durations. This representation enables us to apply the classic Johnson algorithm, designed for building an optimal two-stage job schedule. Simulations performed over a set of realistic workloads demonstrate that 10%-25% makespan improvements are achievable simply by processing the jobs in the right order. However, in some cases, the simplified abstraction assumed by Johnson's algorithm may lead to a suboptimal job schedule. We design a novel heuristic, called BalancedPools, that significantly improves on Johnson's schedule (by up to 15%-38%) precisely in the situations where Johnson's algorithm produces a suboptimal makespan. The results of our simulation study are validated through experiments on a 66-node Hadoop cluster.
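
For readers unfamiliar with the two-stage abstraction the abstract refers to, the sketch below is a minimal illustration (not taken from the paper; the function names and sample durations are hypothetical) of how Johnson's classic rule orders jobs given as (map duration, reduce duration) pairs, and of how the resulting makespan can be evaluated under the idealized pipeline model in which a job's reduce stage starts once its own map stage and the previous job's reduce stage have both finished.

from typing import List, Tuple

# Each job is abstracted as a (map stage duration, reduce stage duration) pair.
Job = Tuple[float, float]

def johnson_order(jobs: List[Job]) -> List[Job]:
    # Johnson's rule for the two-machine flow shop: jobs whose map stage
    # is no longer than their reduce stage go first, in increasing order
    # of map duration; the remaining jobs go last, in decreasing order of
    # reduce duration.
    first = sorted([j for j in jobs if j[0] <= j[1]], key=lambda j: j[0])
    last = sorted([j for j in jobs if j[0] > j[1]], key=lambda j: j[1], reverse=True)
    return first + last

def makespan(jobs: List[Job]) -> float:
    # Idealized pipeline model: map stages run back to back; a job's
    # reduce stage starts once its own map stage and the previous job's
    # reduce stage have both completed.
    map_done = 0.0
    reduce_done = 0.0
    for m, r in jobs:
        map_done += m
        reduce_done = max(map_done, reduce_done) + r
    return reduce_done

if __name__ == "__main__":
    # Hypothetical stage durations, for illustration only.
    workload = [(4, 5), (4, 1), (30, 4), (6, 30), (2, 3)]
    print("submission order makespan:", makespan(workload))                 # 77.0
    print("Johnson order makespan:", makespan(johnson_order(workload)))     # 47.0

On these made-up durations, reordering alone shrinks the makespan from 77 to 47 time units, which illustrates the kind of ordering-only gains the abstract reports; the paper's BalancedPools heuristic addresses the cases where this simplified two-stage model misleads Johnson's rule.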

Original language: English (US)
Journal: HP Laboratories Technical Report
Issue number: 127
State: Published - Jun 21, 2012

Keywords

  • Batch workloads
  • Hadoop
  • MapReduce
  • Minimized makespan
  • Optimized schedule

ASJC Scopus subject areas

  • Software
  • Hardware and Architecture
  • Computer Networks and Communications

Cite this

Verma, Abhishek; Cherkasova, Ludmila; Campbell, R. H. Two sides of a coin: Optimizing the schedule of map reduce jobs to minimize their makespan and improve cluster performance. In: HP Laboratories Technical Report, No. 127, June 21, 2012.

Scopus record: http://www.scopus.com/inward/record.url?scp=84862309519&partnerID=8YFLogxK (AN: SCOPUS:84862309519)
