Deadline-based workload management for MapReduce environments: Pieces of the performance puzzle

Abhishek Verma, Ludmila Cherkasova, Vijay S. Kumar, Roy H. Campbell

Research output: Contribution to journal › Article › peer-review

Abstract

Hadoop and the associated MapReduce paradigm have become the de facto platform for cost-effective analytics over "Big Data". There is an increasing number of MapReduce applications associated with live business intelligence that require completion time guarantees. In this work, we introduce and analyze a set of complementary mechanisms that enhance workload management decisions for processing MapReduce jobs with deadlines. The three mechanisms we consider are the following: 1) a policy for job ordering in the processing queue; 2) a mechanism for allocating a tailored number of map and reduce slots to each job with a completion time requirement; 3) a mechanism for allocating and deallocating (if necessary) spare resources in the system among the active jobs. We analyze the functionality and performance benefits of each mechanism via an extensive set of simulations over diverse workload sets. The proposed mechanisms form the integral pieces in the performance puzzle of automated workload management in MapReduce environments.
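To make the first two mechanisms concrete, the following is a minimal Java sketch, not the authors' implementation: it orders the job queue by earliest deadline first (EDF) and derives a rough per-job map/reduce slot allotment from a simple task profile. The EDF policy, the Job profile fields (mapTasks, avgMapTimeSec, and so on), and the even split of the deadline budget between the map and reduce phases are all illustrative assumptions, not the model described in the report.

    import java.util.ArrayList;
    import java.util.Comparator;
    import java.util.List;

    // Hypothetical sketch of deadline-aware job ordering and slot sizing.
    public class DeadlineScheduler {

        static class Job {
            final String name;
            final long deadlineSec;      // completion time requirement
            final int mapTasks;          // number of map tasks
            final int reduceTasks;       // number of reduce tasks
            final double avgMapTimeSec;  // profiled average map task duration
            final double avgRedTimeSec;  // profiled average reduce task duration

            Job(String name, long deadlineSec, int mapTasks, int reduceTasks,
                double avgMapTimeSec, double avgRedTimeSec) {
                this.name = name;
                this.deadlineSec = deadlineSec;
                this.mapTasks = mapTasks;
                this.reduceTasks = reduceTasks;
                this.avgMapTimeSec = avgMapTimeSec;
                this.avgRedTimeSec = avgRedTimeSec;
            }

            // Assumption: each phase gets half the deadline budget; the slot
            // count is then the smallest number that pushes all of that
            // phase's tasks through in time. A real model would use measured
            // phase profiles rather than this 50/50 split.
            int mapSlotsNeeded() {
                return (int) Math.ceil(mapTasks * avgMapTimeSec / (deadlineSec / 2.0));
            }

            int reduceSlotsNeeded() {
                return (int) Math.ceil(reduceTasks * avgRedTimeSec / (deadlineSec / 2.0));
            }
        }

        public static void main(String[] args) {
            List<Job> queue = new ArrayList<>();
            queue.add(new Job("sessionize", 600, 200, 20, 30, 60));
            queue.add(new Job("report",     300, 100, 10, 20, 40));

            // Mechanism 1 (sketch): earliest-deadline-first job ordering.
            queue.sort(Comparator.comparingLong(j -> j.deadlineSec));

            // Mechanism 2 (sketch): tailored slot allotment per job.
            for (Job j : queue) {
                System.out.printf("%s: %d map slots, %d reduce slots%n",
                        j.name, j.mapSlotsNeeded(), j.reduceSlotsNeeded());
            }
        }
    }

The third mechanism, redistributing spare slots among active jobs and reclaiming them when a new deadline-bound job arrives, would sit on top of such a scheduler; it is omitted here because the abstract gives no detail on its policy.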

Original language: English (US)
Journal: HP Laboratories Technical Report
Issue number: 82
State: Published - 2012

Keywords

  • Hadoop
  • Job scheduling
  • MapReduce
  • Performance
  • Resource allocation

ASJC Scopus subject areas

  • Software
  • Hardware and Architecture
  • Computer Networks and Communications
