Abstract
The MPI API provides support for Cartesian process topologies, including the option to reorder the processes to achieve better communication performance. But MPI implementations rarely do anything useful with the reorder option, typically ignoring it. One argument is that modern interconnects are fast enough that applications are less sensitive to the exact mapping of processes onto the system. However, intranode communication performance is much greater than internode communication performance. In this paper, we show a simple approach that takes into account only information about which MPI processes are on the same node to provide a fast and effective implementation of the MPI Cartesian topology routine. While not optimal, this approach provides a significant improvement over all tested MPI implementations and yields an implementation that may be used as the default for MPI_Cart_create in any MPI implementation. We also explore the impact of taking into account the mapping of processes to processor chips or sockets, and show that doing so is relatively easy to accomplish but provides only a small additional improvement in performance.
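To make the abstract concrete, the following is a minimal sketch (not the paper's implementation) of the two standard-MPI ingredients it refers to: MPI_Comm_split_type with MPI_COMM_TYPE_SHARED, which groups the processes that share a node, and MPI_Cart_create called with reorder enabled, which permits a topology-aware implementation to permute ranks using exactly that node-level information. The 2-D grid and the printed mapping are illustrative only.

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int nprocs, wrank;
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);
    MPI_Comm_rank(MPI_COMM_WORLD, &wrank);

    /* Group the processes that can communicate through shared memory,
     * i.e., the processes on the same node. */
    MPI_Comm nodecomm;
    MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                        MPI_INFO_NULL, &nodecomm);

    int noderank, nodesize;
    MPI_Comm_rank(nodecomm, &noderank);
    MPI_Comm_size(nodecomm, &nodesize);

    /* Pick a balanced 2-D decomposition for the available processes. */
    int dims[2] = {0, 0}, periods[2] = {0, 0};
    MPI_Dims_create(nprocs, 2, dims);

    /* Request the Cartesian topology with reorder = 1: the implementation
     * is then free to remap ranks so that neighbors in the grid tend to
     * fall on the same node, which is the opportunity the paper exploits. */
    MPI_Comm cartcomm;
    MPI_Cart_create(MPI_COMM_WORLD, 2, dims, periods, 1 /* reorder */,
                    &cartcomm);

    int crank;
    MPI_Comm_rank(cartcomm, &crank);
    printf("world rank %d -> cart rank %d (node-local rank %d of %d)\n",
           wrank, crank, noderank, nodesize);

    MPI_Comm_free(&cartcomm);
    MPI_Comm_free(&nodecomm);
    MPI_Finalize();
    return 0;
}
```

With most current MPI libraries the printed world and Cartesian ranks will coincide, since the reorder flag is typically ignored; a node-aware MPI_Cart_create of the kind studied in the paper would instead report a permuted mapping that keeps grid neighbors node-local.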
Original language | English (US) |
---|---|
Pages (from-to) | 98-108 |
Number of pages | 11 |
Journal | Parallel Computing |
Volume | 85 |
DOIs | |
State | Published - Jul 2019 |
Keywords
- Cartesian process topology
- MPI
- Message passing
- Process topology
ASJC Scopus subject areas
- Software
- Theoretical Computer Science
- Hardware and Architecture
- Computer Networks and Communications
- Computer Graphics and Computer-Aided Design
- Artificial Intelligence