Analysis of topology-dependent MPI performance on Gemini networks

Antonio J. Peña, Ralf G. Correa Carvalho, James Dinan, Pavan Balaji, Rajeev Thakur, William Gropp

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Current HPC systems utilize a variety of interconnection networks with varying features and communication characteristics. MPI normalizes these interconnects with a common interface used by most HPC applications. However, network properties can have a significant impact on application performance. We explore the impact of the interconnect on application performance on the Blue Waters supercomputer. Blue Waters uses a three-dimensional Cray Gemini torus network, which provides twice as much bandwidth in the X and Z dimensions as in the Y dimension. Through several benchmarks, including a halo-exchange example, we demonstrate that application-level mapping to the network topology yields significant performance improvements.

Original language: English (US)
Title of host publication: Proceedings of the 20th European MPI Users' Group Meeting, EuroMPI 2013
Publisher: Association for Computing Machinery
Number of pages: 6
ISBN (Print): 9788461651337
State: Published - 2013
Event: 20th European MPI Users' Group Meeting, EuroMPI 2013 - Madrid, Spain
Duration: Sep 15 2013 - Sep 18 2013

Publication series

Name: ACM International Conference Proceeding Series


Other: 20th European MPI Users' Group Meeting, EuroMPI 2013


Keywords

  • Gemini
  • Interconnection network
  • MPI
  • Network topology

ASJC Scopus subject areas

  • Software
  • Human-Computer Interaction
  • Computer Vision and Pattern Recognition
  • Computer Networks and Communications


