Abstract
This paper takes a renewed look at the problem of managing intermediate data that is generated during dataflow computations (e.g., MapReduce, Pig, Dryad) within clouds. We discuss salient features of this intermediate data and outline requirements for a solution. Our experiments show that existing local write/remote read solutions, traditional distributed file systems (e.g., HDFS), and support from transport protocols (e.g., TCP-Nice) cannot guarantee both data availability and minimal interference, which are our key requirements. We present design ideas for a new intermediate data storage system.
Original language | English (US) |
---|---|
State | Published - 2009 |
Event | 12th Workshop on Hot Topics in Operating Systems, HotOS 2009 - Monte Verita, Switzerland |
Duration | May 18 2009 → May 20 2009 |
Conference
Conference | 12th Workshop on Hot Topics in Operating Systems, HotOS 2009 |
---|---|
Country/Territory | Switzerland |
City | Monte Verita |
Period | 5/18/09 → 5/20/09 |
ASJC Scopus subject areas
- Hardware and Architecture
- Information Systems
- Computer Networks and Communications