When huge is routine: Scaling genetic algorithms and estimation of distribution algorithms via data-intensive computing

Xavier Llorà, Abhishek Verma, Roy H. Campbell, David E. Goldberg

Research output: Contribution to journal › Article › peer-review

Abstract

Data-intensive computing has emerged as a key approach for processing large volumes of data by exploiting massive parallelism. Data-intensive computing frameworks have shown that terabytes and petabytes of data can be routinely processed. However, there has been little effort to explore how data-intensive computing can help scale evolutionary computation. In this book chapter we explore how evolutionary computation algorithms can be modeled using two different data-intensive frameworks: Yahoo!'s Hadoop and NCSA's Meandre. We present a detailed step-by-step description of how three evolutionary computation algorithms, each with a different execution profile, can be translated into the data-intensive computing paradigms. Results show that (1) Hadoop is an excellent choice for pushing evolutionary computation boundaries on very large problems, and (2) Meandre achieves transparent linear speedups without changes to the underlying data-intensive flow, thanks to its inherent parallel processing.
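To make the translation concrete, here is a minimal sketch (not taken from the chapter; all function names are illustrative) of how a genetic algorithm's fitness-evaluation step fits the MapReduce model that Hadoop implements: the map phase scores individuals independently, and the reduce phase aggregates the scored population for selection. The OneMax fitness function is assumed purely for illustration.

```python
import random

def map_fitness(individual):
    # Map phase: each mapper evaluates one individual independently,
    # emitting a (fitness, individual) pair. OneMax: count the 1-bits.
    return (sum(individual), individual)

def reduce_select(scored, k):
    # Reduce phase: aggregate the scored population and keep the k
    # fittest individuals (truncation selection), best first.
    return [ind for _, ind in
            sorted(scored, key=lambda p: p[0], reverse=True)[:k]]

random.seed(0)
population = [[random.randint(0, 1) for _ in range(8)] for _ in range(10)]

# The map step is embarrassingly parallel, which is what lets a
# data-intensive framework scale it across many machines.
scored = [map_fitness(ind) for ind in population]
parents = reduce_select(scored, k=4)
```

In a real Hadoop job the map and reduce functions would be distributed across the cluster; this sequential version only shows the data flow.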

Original language: English (US)
Pages (from-to): 11-41
Number of pages: 31
Journal: Studies in Computational Intelligence
Volume: 269
State: Published - 2010

ASJC Scopus subject areas

  • Artificial Intelligence
