Efficient algorithm for the run-time parallelization of DOACROSS loops

Ding-Kai Chen, Josep Torrellas, Pen-Chung Yew

Research output: Contribution to journal › Conference article › peer-review

Abstract

While automatic parallelization of loops usually relies on compile-time analysis of data dependences, for some loops the data dependences cannot be determined at compile time. An example is loops that access arrays with subscripted subscripts. To parallelize these loops, it is necessary to perform run-time analysis. In this paper, we present a new algorithm to parallelize these loops at run time. Our scheme handles any type of data dependence in the loop without requiring any special architectural support in the multiprocessor. Furthermore, compared to an older scheme with the same generality, our scheme significantly reduces the amount of processor communication required and increases the overlap among dependent iterations. We evaluate our algorithm with parameterized loops running on the 32-processor Cedar shared-memory multiprocessor. The results show speedups over the serial code of up to 14 with the full overhead of run-time analysis and of up to 27 if part of the analysis is reused across loop invocations. Moreover, the algorithm outperforms the older scheme in nearly all cases, with the largest speedups over it occurring when the loop has many dependences.
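As a point of reference for the kind of run-time analysis the abstract describes, the sketch below shows a generic inspector for a loop with subscripted subscripts. This is not the paper's algorithm (which improves on such schemes by reducing communication and overlapping dependent iterations); it is a minimal, conservative wavefront-style inspector, and all names (compute_wavefronts, w, r, stage) are illustrative assumptions.

```c
/* Hedged sketch: run-time (inspector/executor-style) analysis for a loop of
 * the form   for (i = 0; i < n; i++)  x[w[i]] = f(x[r[i]]);
 * where the subscript arrays w[] and r[] are known only at run time.
 * The inspector assigns each iteration a stage (wavefront) such that
 * iterations in the same stage access no common element of x[] and can
 * therefore run in parallel.  All accesses are conservatively treated as
 * conflicting; real schemes distinguish reads from writes.               */
#include <stdlib.h>

static int compute_wavefronts(int n, int m,
                              const int *w, const int *r, int *stage)
{
    /* last_stage[e]: highest stage so far of any iteration touching x[e];
     * m is the number of elements of the indexed array x[].              */
    int *last_stage = malloc(m * sizeof *last_stage);
    int max_stage = 0;

    for (int e = 0; e < m; e++)
        last_stage[e] = -1;

    for (int i = 0; i < n; i++) {
        int s = last_stage[w[i]] > last_stage[r[i]]
              ? last_stage[w[i]] : last_stage[r[i]];
        stage[i] = s + 1;                /* one stage after the latest conflict */
        last_stage[w[i]] = stage[i];
        last_stage[r[i]] = stage[i];
        if (stage[i] > max_stage)
            max_stage = stage[i];
    }
    free(last_stage);
    return max_stage + 1;                /* number of parallel stages */
}
```

The executor would then run the stages in order, executing all iterations of a stage concurrently. The cost of this inspection and the synchronization at stage boundaries are precisely the overheads that the paper's scheme targets.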

Original language: English (US)
Pages (from-to): 518-527
Number of pages: 10
Journal: Proceedings of the ACM/IEEE Supercomputing Conference
State: Published - 1994
Event: Proceedings of the 1994 Supercomputing Conference - Washington, DC, USA
Duration: Nov 14, 1994 - Nov 18, 1994

ASJC Scopus subject areas

  • Electrical and Electronic Engineering
