Techniques for reducing the overhead of run-time parallelization

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Current parallelizing compilers cannot identify a significant fraction of parallelizable loops because these loops have access patterns that are complex or insufficiently defined statically. Because such loops arise frequently in practice, we have introduced a novel framework for their identification: speculative parallelization. While we have previously shown that this method is inherently scalable, its practical success depends on the fraction of ideal speedup that can be obtained on modest to moderately large parallel machines. Maximum parallelism can be obtained only by minimizing the run-time overhead of the method, which in turn depends on its level of integration within a classic restructuring compiler and on its adaptation to the characteristics of the parallelized application. We present several compiler and run-time techniques designed specifically for optimizing the run-time parallelization of sparse applications. We show how to minimize the run-time overhead of speculatively parallelizing sparse applications by using static control-flow information to reduce the number of memory references that must be collected at run time. We then present heuristics for speculating on the types and data structures used by the program, thereby reducing the memory required to trace sparse access patterns. Finally, we present an implementation in the Polaris infrastructure and experimental results.
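The run-time marking phase the abstract alludes to can be illustrated with a simplified sketch in the style of the LRPD test (the run-time dependence test underlying this line of work). This is a hypothetical illustration, not the paper's implementation: it records which iterations read and write each array element and declares the loop fully parallel only if no element written in one iteration is touched by another. The function name and calling convention are illustrative assumptions.

```python
def lrpd_test(n_iters, accesses, array_size):
    """Simplified run-time dependence test (LRPD-style sketch).

    `accesses(it)` returns the list of (index, is_write) pairs that
    iteration `it` performs on the traced array. In the paper's setting,
    only references that cannot be disambiguated statically (e.g. indirect
    accesses through a sparse index array) would need to be traced here;
    provably independent references are filtered out at compile time.
    """
    readers = [set() for _ in range(array_size)]  # iterations reading elem
    writers = [set() for _ in range(array_size)]  # iterations writing elem

    # Marking phase: record per-element access history by iteration.
    for it in range(n_iters):
        for idx, is_write in accesses(it):
            (writers if is_write else readers)[idx].add(it)

    # Analysis phase: the loop is fully parallel iff every written element
    # is touched by a single iteration (no cross-iteration flow, anti, or
    # output dependences).
    for idx in range(array_size):
        if writers[idx] and len(writers[idx] | readers[idx]) > 1:
            return False
    return True
```

For example, a loop writing `A[idx[i]]` with a duplicate-free index array passes the test (`lrpd_test(4, lambda i: [(i, True)], 4)` returns `True`), while one in which every iteration writes the same element fails (`lrpd_test(4, lambda i: [(0, True)], 4)` returns `False`). In speculative parallelization the loop is executed in parallel while this trace is gathered, and it is re-executed sequentially if the test fails.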

Original language: English (US)
Title of host publication: Compiler Construction - 9th International Conference, CC 2000 Held as Part of the Joint European Conferences on Theory and Practice of Software, ETAPS 2000, Proceedings
Editors: David A. Watt
Publisher: Springer-Verlag
Pages: 232-248
Number of pages: 17
ISBN (Print): 354067263X, 9783540672630
State: Published - Jan 1 2000
Externally published: Yes
Event: 9th International Conference on Compiler Construction, CC 2000 Held as Part of the Joint European Conferences on Theory and Practice of Software, ETAPS 2000 - Berlin, Germany
Duration: Mar 25 2000 - Apr 2 2000

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 1781
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Other

Other: 9th International Conference on Compiler Construction, CC 2000 Held as Part of the Joint European Conferences on Theory and Practice of Software, ETAPS 2000
Country: Germany
City: Berlin
Period: 3/25/00 - 4/2/00

ASJC Scopus subject areas

  • Theoretical Computer Science
  • Computer Science (all)


  • Cite this

    Yu, H., & Rauchwerger, L. (2000). Techniques for reducing the overhead of run-time parallelization. In D. A. Watt (Ed.), Compiler Construction - 9th International Conference, CC 2000 Held as Part of the Joint European Conferences on Theory and Practice of Software, ETAPS 2000, Proceedings (pp. 232-248). (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Vol. 1781). Springer-Verlag.