TY - GEN
T1 - Principles of speculative run-time parallelization
AU - Patel, Devang
AU - Rauchwerger, Lawrence
N1 - Publisher Copyright:
© Springer-Verlag Berlin Heidelberg 1999.
PY - 1999
Y1 - 1999
N2 - Current parallelizing compilers cannot identify a significant fraction of parallelizable loops because they have complex or statically insufficiently defined access patterns. We advocate a novel framework for the identification of parallel loops. It speculatively executes a loop as a doall and applies a fully parallel data dependence test to check for any unsatisfied data dependences; if the test fails, the loop is re-executed serially. We present the principles of the design and implementation of a compiler that employs both run-time and static techniques to parallelize dynamic applications. Run-time optimizations always represent a tradeoff between a speculated potential benefit and a certain overhead that must be paid. We introduce techniques that take advantage of classic compiler methods to reduce the cost of run-time optimization, thus tilting the outcome of speculation in favor of significant performance gains. Experimental results from the PERFECT, SPEC, and NCSA benchmark suites show that these techniques yield speedups not obtainable by any other known method.
AB - Current parallelizing compilers cannot identify a significant fraction of parallelizable loops because they have complex or statically insufficiently defined access patterns. We advocate a novel framework for the identification of parallel loops. It speculatively executes a loop as a doall and applies a fully parallel data dependence test to check for any unsatisfied data dependences; if the test fails, the loop is re-executed serially. We present the principles of the design and implementation of a compiler that employs both run-time and static techniques to parallelize dynamic applications. Run-time optimizations always represent a tradeoff between a speculated potential benefit and a certain overhead that must be paid. We introduce techniques that take advantage of classic compiler methods to reduce the cost of run-time optimization, thus tilting the outcome of speculation in favor of significant performance gains. Experimental results from the PERFECT, SPEC, and NCSA benchmark suites show that these techniques yield speedups not obtainable by any other known method.
UR - http://www.scopus.com/inward/record.url?scp=84947934255&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84947934255&partnerID=8YFLogxK
U2 - 10.1007/3-540-48319-5_21
DO - 10.1007/3-540-48319-5_21
M3 - Conference contribution
AN - SCOPUS:84947934255
SN - 3540664262
SN - 9783540664260
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 323
EP - 336
BT - Languages and Compilers for Parallel Computing - 11th International Workshop, LCPC 1998, Proceedings
A2 - Chatterjee, Siddhartha
A2 - Prins, Jan F.
A2 - Carter, Larry
A2 - Ferrante, Jeanne
A2 - Li, Zhiyuan
A2 - Sehr, David
A2 - Yew, Pen-Chung
PB - Springer
T2 - 11th International Workshop on Languages and Compilers for Parallel Computing, LCPC 1998
Y2 - 7 August 1998 through 9 August 1998
ER -