Enabling communication concurrency through flexible MPI endpoints

James Dinan, Ryan E. Grant, Pavan Balaji, David Goodell, Douglas Miller, Marc Snir, Rajeev Thakur

Research output: Contribution to journal › Article › peer-review

Abstract

MPI defines a one-to-one relationship between MPI processes and ranks. This model captures many use cases effectively; however, it also limits communication concurrency and interoperability between MPI and programming models that use threads. This paper describes the MPI endpoints extension, which relaxes the longstanding one-to-one relationship between MPI processes and ranks. Using endpoints, an MPI implementation can map separate communication contexts to threads, allowing them to drive communication independently. Endpoints also make threads individually addressable in MPI operations, enhancing interoperability between MPI and other programming models. These characteristics are illustrated through several examples and an empirical study that contrasts the communication performance achievable with current multithreaded MPI against the high degree of communication concurrency required to reach peak communication performance.
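
The abstract does not include code; the minimal C sketch below illustrates the usage model it describes, in which each thread attaches to its own endpoint communicator and drives communication independently. MPI_Comm_create_endpoints is the interface proposed by this work, not part of the MPI standard, so the signature shown here, and the ring exchange used as a workload, are illustrative assumptions rather than the paper's verbatim example.

    /* Sketch of the endpoints usage model: one endpoint per thread.
     * NOTE: MPI_Comm_create_endpoints is the proposed extension, not
     * standard MPI; its signature here is an assumption. */
    #include <mpi.h>
    #include <omp.h>

    #define NUM_THREADS 4

    int main(int argc, char **argv) {
        int provided;
        MPI_Comm ep_comm[NUM_THREADS];

        MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);

        /* Proposed call: each process creates NUM_THREADS endpoints, so
         * every thread receives its own rank in the new communicator. */
        MPI_Comm_create_endpoints(MPI_COMM_WORLD, NUM_THREADS,
                                  MPI_INFO_NULL, ep_comm);

        #pragma omp parallel num_threads(NUM_THREADS)
        {
            int tid = omp_get_thread_num();
            int my_rank, nranks;

            /* Each thread attaches to its own endpoint communicator
             * and is addressable by its per-thread rank. */
            MPI_Comm_rank(ep_comm[tid], &my_rank);
            MPI_Comm_size(ep_comm[tid], &nranks);

            /* Threads drive communication independently: a simple ring
             * exchange among all endpoints across all processes. */
            int token = my_rank;
            MPI_Sendrecv_replace(&token, 1, MPI_INT,
                                 (my_rank + 1) % nranks, 0,
                                 (my_rank - 1 + nranks) % nranks, 0,
                                 ep_comm[tid], MPI_STATUS_IGNORE);

            MPI_Comm_free(&ep_comm[tid]);
        }

        MPI_Finalize();
        return 0;
    }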

Original language: English (US)
Pages (from-to): 390-405
Number of pages: 16
Journal: International Journal of High Performance Computing Applications
Volume: 28
Issue number: 4
DOIs
State: Published - Nov 20 2014
Externally published: Yes

Keywords

  • MPI
  • communication concurrency
  • endpoints
  • hybrid parallel programming
  • interoperability

ASJC Scopus subject areas

  • Software
  • Theoretical Computer Science
  • Hardware and Architecture
