Scaling eCGA model building via data-intensive computing

Abhishek Verma, Xavier Llorà, Shivaram Venkataraman, David E. Goldberg, Roy H. Campbell

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

This paper shows how the extended compact genetic algorithm (eCGA) can be scaled using data-intensive computing techniques such as MapReduce. Two different frameworks (Hadoop and MongoDB) are used to deploy MapReduce implementations of the compact and extended compact genetic algorithms. Results show that both frameworks are good choices for tackling large-scale problems, as they scale with the number of commodity machines, in contrast to previous efforts that required either specialized high-performance hardware or shared-memory environments.
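
To make the decomposition concrete, the sketch below (in Java, the language of Hadoop's native MapReduce API) shows one plausible way a single compact GA generation could be phrased as a MapReduce job: mappers evaluate sampled individuals in parallel, and a reducer runs pairwise tournaments and emits the update to the probability-vector model. This is an illustrative assumption, not the paper's implementation: the class names, the OneMax stand-in fitness, the tab-separated record format, and the single-reducer model update are all hypothetical simplifications.

// Hypothetical sketch of one compact GA generation as a Hadoop MapReduce job.
// Assumptions: each input line holds one sampled individual as a bit string,
// OneMax stands in for the real fitness function, and a single reducer applies
// pairwise tournaments to compute a delta for the stored probability vector.
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

public class CgaGenerationSketch {

  /** Map phase: evaluate each sampled individual in parallel. */
  public static class EvalMapper
      extends Mapper<LongWritable, Text, NullWritable, Text> {
    @Override
    protected void map(LongWritable offset, Text line, Context ctx)
        throws IOException, InterruptedException {
      String bits = line.toString().trim();
      long fitness = bits.chars().filter(c -> c == '1').count(); // OneMax stand-in
      ctx.write(NullWritable.get(), new Text(fitness + "\t" + bits));
    }
  }

  /** Reduce phase: run tournaments and emit the per-bit delta that the driver
   *  would add to the stored probability vector (the cGA model). */
  public static class ModelUpdateReducer
      extends Reducer<NullWritable, Text, NullWritable, Text> {
    @Override
    protected void reduce(NullWritable key, Iterable<Text> values, Context ctx)
        throws IOException, InterruptedException {
      List<String[]> pop = new ArrayList<>();            // each entry: [fitness, bits]
      for (Text t : values) pop.add(t.toString().split("\t"));
      if (pop.isEmpty()) return;

      int n = pop.size();                                // stand-in for the cGA population-size parameter
      double[] delta = new double[pop.get(0)[1].length()];
      for (int i = 0; i + 1 < pop.size(); i += 2) {      // tournaments of size 2
        String[] a = pop.get(i), b = pop.get(i + 1);
        boolean aWins = Long.parseLong(a[0]) >= Long.parseLong(b[0]);
        String winner = aWins ? a[1] : b[1];
        String loser  = aWins ? b[1] : a[1];
        for (int j = 0; j < delta.length; j++) {
          if (winner.charAt(j) != loser.charAt(j)) {     // standard cGA update rule
            delta[j] += (winner.charAt(j) == '1' ? 1.0 : -1.0) / n;
          }
        }
      }
      StringBuilder out = new StringBuilder();
      for (double d : delta) out.append(d).append(' ');
      ctx.write(NullWritable.get(), new Text(out.toString().trim()));
    }
  }
}

In practice, funneling the model update through one reducer would limit scalability; the point of the sketch is only the split between embarrassingly parallel fitness evaluation (map) and the model update (reduce).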

Original language: English (US)
Title of host publication: 2010 IEEE World Congress on Computational Intelligence, WCCI 2010 - 2010 IEEE Congress on Evolutionary Computation, CEC 2010
DOIs
State: Published - 2010
Event: 2010 6th IEEE World Congress on Computational Intelligence, WCCI 2010 - 2010 IEEE Congress on Evolutionary Computation, CEC 2010 - Barcelona, Spain
Duration: Jul 18, 2010 - Jul 23, 2010

Publication series

Name: 2010 IEEE World Congress on Computational Intelligence, WCCI 2010 - 2010 IEEE Congress on Evolutionary Computation, CEC 2010

Other

Other: 2010 6th IEEE World Congress on Computational Intelligence, WCCI 2010 - 2010 IEEE Congress on Evolutionary Computation, CEC 2010
Country/Territory: Spain
City: Barcelona
Period: 7/18/10 - 7/23/10

ASJC Scopus subject areas

  • Computational Theory and Mathematics
  • Applied Mathematics
