Abstract
MPI defines a one-to-one relationship between MPI processes and ranks. This model captures many use cases effectively; however, it also limits communication concurrency and interoperability between MPI and programming models that utilize threads. This paper describes the MPI endpoints extension, which relaxes the longstanding one-to-one relationship between MPI processes and ranks. Using endpoints, an MPI implementation can map separate communication contexts to threads, allowing them to drive communication independently. Endpoints also make threads addressable in MPI operations, enhancing interoperability between MPI and other programming models. These characteristics are illustrated through several examples and an empirical study that contrasts current multithreaded communication performance with the high degree of communication concurrency needed to achieve peak communication performance.
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 390-405 |
| Number of pages | 16 |
| Journal | International Journal of High Performance Computing Applications |
| Volume | 28 |
| Issue number | 4 |
| DOIs | |
| State | Published - Nov 20 2014 |
| Externally published | Yes |
Keywords
- MPI
- communication concurrency
- endpoints
- hybrid parallel programming
- interoperability
ASJC Scopus subject areas
- Software
- Theoretical Computer Science
- Hardware and Architecture