Code transformations to improve memory parallelism

Vijay S. Pai, Sarita Adve

Research output: Contribution to journal › Article › peer-review


Current microprocessors incorporate techniques to exploit instruction-level parallelism (ILP). However, previous work has shown that these ILP techniques are less effective in removing memory stall time than CPU time, making the memory system a greater bottleneck in ILP-based systems than in previous-generation systems. These deficiencies arise largely because applications present limited opportunities for an out-of-order issue processor to overlap multiple read misses, the dominant source of memory stalls. This work proposes code transformations to increase parallelism in the memory system by overlapping multiple read misses within the same instruction window, while preserving cache locality. We present an analysis and transformation framework suitable for compiler implementation. Our simulation experiments show execution time reductions averaging 20% in a multiprocessor and 30% in a uniprocessor. A substantial part of these reductions comes from increases in memory parallelism. We see similar benefits on a Convex Exemplar.

Original language: English (US)
Journal: Journal of Instruction-Level Parallelism
State: Published - May 1 2000


Keywords

  • Compiler transformations
  • Latency tolerance
  • Memory parallelism
  • Out-of-order issue
  • Unroll-and-jam

ASJC Scopus subject areas

  • Software
  • Information Systems
  • Hardware and Architecture


