TY - GEN
T1 - Integrated compilation and scalability analysis for parallel systems
AU - Mendes, Celso L.
AU - Reed, Daniel A.
N1 - Funding Information:
Supported in part by the Defense Advanced Research Projects Agency under DARPA contracts DABT63-94-C0049 (SIO Initiative), F30602-96-C-0161 and DABT63-96-C-0027, by the National Science Foundation under grants NSF CDA 94-01124 and ASC 97-20202, and by the Department of Energy under contracts DOE B-341494, W-7405-ENG-48 and 1-B-333164.
Funding Information:
Supported in part by a scholarship from CNPq/Brazil.
Publisher Copyright:
© 1998 IEEE.
PY - 1998
Y1 - 1998
N2 - Despite the performance potential of parallel systems, several factors have hindered their widespread adoption. Of these, performance variability is among the most significant. Data parallel languages, which facilitate the programming of such systems, increase the semantic distance between a program's source code and its observable performance, thus aggravating the optimization problem. In this paper, we present a new methodology to automatically predict the performance scalability of data parallel applications on multicomputers. Our technique represents the execution time of a program as a symbolic expression that includes the number of processors (P), the problem size (N), and other system-dependent parameters. This methodology relies heavily on information collected at compile time. By extending an existing data parallel compiler (Fortran D95), we derive, during compilation, a symbolic cost model that represents the expected cost of each high-level code section and, inductively, of the complete program.
AB - Despite the performance potential of parallel systems, several factors have hindered their widespread adoption. Of these, performance variability is among the most significant. Data parallel languages, which facilitate the programming of such systems, increase the semantic distance between a program's source code and its observable performance, thus aggravating the optimization problem. In this paper, we present a new methodology to automatically predict the performance scalability of data parallel applications on multicomputers. Our technique represents the execution time of a program as a symbolic expression that includes the number of processors (P), the problem size (N), and other system-dependent parameters. This methodology relies heavily on information collected at compile time. By extending an existing data parallel compiler (Fortran D95), we derive, during compilation, a symbolic cost model that represents the expected cost of each high-level code section and, inductively, of the complete program.
UR - http://www.scopus.com/inward/record.url?scp=84966563866&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84966563866&partnerID=8YFLogxK
U2 - 10.1109/PACT.1998.727287
DO - 10.1109/PACT.1998.727287
M3 - Conference contribution
AN - SCOPUS:84966563866
T3 - Parallel Architectures and Compilation Techniques - Conference Proceedings, PACT
SP - 385
EP - 392
BT - Proceedings - 1998 International Conference on Parallel Architectures and Compilation Techniques, PACT 1998
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 1998 International Conference on Parallel Architectures and Compilation Techniques, PACT 1998
Y2 - 12 October 1998 through 18 October 1998
ER -