ARIA: Automatic resource inference and allocation for MapReduce environments

Abhishek Verma, Ludmila Cherkasova, Roy H. Campbell

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

MapReduce and Hadoop represent an economically compelling alternative for efficient large-scale data processing and advanced analytics in the enterprise. A key challenge in shared MapReduce clusters is automatically tailoring and controlling resource allocations to different applications so that they achieve their performance goals. Currently, no job scheduler for MapReduce environments can, given a job completion deadline, allocate the appropriate amount of resources to the job so that it meets the required Service Level Objective (SLO). In this work, we propose a framework, called ARIA, to address this problem. It comprises three interrelated components. First, for a production job that is routinely executed on a new dataset, we build a job profile that compactly summarizes critical performance characteristics of the underlying application during the map and reduce stages. Second, we design a MapReduce performance model that, for a given job with a known profile and its SLO (a soft deadline), estimates the amount of resources required to complete the job within the deadline. Finally, we implement a novel SLO-based scheduler in Hadoop that determines job ordering and the amount of resources to allocate for meeting the job deadlines. We validate our approach using a set of realistic applications. The new scheduler effectively meets the jobs' SLOs until the job demands exceed the cluster resources. The results of an extensive simulation study are validated through detailed experiments on a 66-node Hadoop cluster.
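The performance model described in the abstract estimates how many map (or reduce) slots a job needs to finish a stage of n tasks by a deadline, using per-stage task statistics from the job profile. As a rough illustration of this idea (a minimal sketch, not the authors' implementation; the function name, its signature, the example numbers, and the use of the classic greedy-makespan bounds T_low = n*avg/k and T_up = (n-1)*avg/k + max are all assumptions made here for exposition), one can invert the upper bound to obtain a conservative slot count:

import math

# Illustrative sketch only: not the paper's exact model. Assumes a stage of
# n_tasks independent tasks with known average (avg) and maximum (mx)
# durations from the job profile, scheduled greedily on k identical slots,
# so that T_up = (n_tasks - 1) * avg / k + mx bounds the stage makespan.
def min_slots_for_deadline(n_tasks: int, avg: float, mx: float,
                           deadline: float) -> int:
    """Smallest slot count k whose upper makespan bound fits the deadline."""
    if mx > deadline:
        raise ValueError("a single task already exceeds the deadline")
    if n_tasks <= 1 or mx == deadline:
        # One task fits, or there is no slack beyond the longest task:
        # with n_tasks slots every task starts immediately, so T = mx.
        return max(1, n_tasks)
    # Invert (n_tasks - 1) * avg / k + mx <= deadline for k.
    k = (n_tasks - 1) * avg / (deadline - mx)
    return max(1, math.ceil(k))

# Hypothetical example: 200 map tasks (avg 20 s, max 35 s), 400 s deadline.
print(min_slots_for_deadline(200, avg=20.0, mx=35.0, deadline=400.0))  # -> 11

Inverting the upper rather than the lower bound errs on the side of over-allocation, which suits an SLO-driven scheduler: the stage completes by the deadline even under the worst greedy assignment of tasks to slots.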

Original language: English (US)
Title of host publication: HP Laboratories Technical Report
Edition: 58
State: Published - May 17, 2011


Keywords

  • MapReduce
  • Modeling
  • Resource allocation
  • Scheduling

ASJC Scopus subject areas

  • Software
  • Hardware and Architecture
  • Computer Networks and Communications

Cite this

Verma, A., Cherkasova, L., & Campbell, R. H. (2011). ARIA: Automatic resource inference and allocation for MapReduce environments. In HP Laboratories Technical Report (Edition 58).

