TY - JOUR
T1 - Optimizing noncontiguous accesses in MPI-IO
AU - Thakur, Rajeev
AU - Gropp, William
AU - Lusk, Ewing
N1 - Funding Information:
This work was supported by the Mathematical, Information, and Computational Sciences Division subprogram of the Office of Advanced Scientific Computing Research, US Department of Energy, under Contract W-31-109-Eng-38; and by the Scalable I/O Initiative, a multi-agency project funded by the Defense Advanced Research Projects Agency (contract number DABT63-94-C-0049), the Department of Energy, the National Aeronautics and Space Administration, and the National Science Foundation. We thank the Center for Advanced Computing Research at the California Institute of Technology, the National Center for Supercomputing Applications at the University of Illinois, and the National Aerospace Laboratory (NLR) in the Netherlands for providing access to their machines.
PY - 2002/1
Y1 - 2002/1
AB - The I/O access patterns of many parallel applications consist of accesses to a large number of small, noncontiguous pieces of data. If an application's I/O needs are met by making many small, distinct I/O requests, however, I/O performance degrades drastically. To avoid this problem, MPI-IO allows users to access noncontiguous data with a single I/O function call, unlike in Unix I/O. In this paper, we explain how critical this feature of MPI-IO is for high performance and how it enables implementations to perform optimizations. We first classify the different ways of expressing an application's I/O needs in MPI-IO into four levels, called levels 0-3. We demonstrate that, for applications with noncontiguous access patterns, I/O performance improves dramatically if users write their applications to make level-3 requests (noncontiguous, collective) rather than level-0 requests (Unix style). We then describe how our MPI-IO implementation, ROMIO, delivers high performance for noncontiguous requests. We explain in detail the two key optimizations ROMIO performs: data sieving for noncontiguous requests from one process and collective I/O for noncontiguous requests from multiple processes. We describe how we have implemented these optimizations portably on multiple machines and file systems, controlled their memory requirements, and achieved high performance. We demonstrate the performance and portability with results for three applications (an astrophysics-application template, DIST3D; the NAS BTIO benchmark; and an unstructured code, UNSTRUC) on five different parallel machines: HP Exemplar, IBM SP, Intel Paragon, NEC SX-4, and SGI Origin2000.
KW - Collective I/O
KW - Data sieving
KW - MPI-IO
KW - Parallel I/O
UR - http://www.scopus.com/inward/record.url?scp=0036133255&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=0036133255&partnerID=8YFLogxK
DO - 10.1016/S0167-8191(01)00129-6
M3 - Article
AN - SCOPUS:0036133255
SN - 0167-8191
VL - 28
SP - 83
EP - 105
JO - Parallel Computing
JF - Parallel Computing
IS - 1
ER -