An Analysis of Scatter Decomposition

David M. Nicol, Joel H. Saltz

Research output: Contribution to journal › Article › peer-review

Abstract

This paper provides a formal analysis of a powerful mapping technique known as scatter decomposition. Scatter decomposition divides an irregular computational domain into a large number of equal-sized pieces and distributes them modularly among processors. We use a probabilistic model of workload in one dimension to explain formally why and when scatter decomposition works. Our first result is that if correlation in workload is a convex function of distance, then scattering a more finely decomposed domain yields a lower average processor workload variance. Our second result shows that if the workload process is stationary Gaussian and the correlation function decreases linearly in distance until becoming zero and then remains zero, scattering a more finely decomposed domain yields a lower expected maximum processor workload. Finally, we show that if the correlation function decreases linearly across the entire domain, then among all mappings that assign an equal number of domain pieces to each processor, scatter decomposition minimizes the average processor workload variance. The dependence of these results on the assumption of decreasing correlation is illustrated with situations where a coarser granularity actually achieves better load balance.
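The following is a minimal sketch, not taken from the paper, of the mapping the abstract describes: a one-dimensional domain is cut into equal-sized pieces, and piece i is assigned to processor i mod P. The workload model, piece counts, and correlation width below are illustrative assumptions. The workload is simulated as a stationary Gaussian process with a triangular correlation (decreasing linearly to zero with distance, then remaining zero), one of the cases the paper analyzes, so under the paper's first result the measured average processor workload variance should fall as the decomposition gets finer.

import numpy as np

def scatter_map(num_pieces, num_procs):
    # Modular (cyclic) assignment: piece i goes to processor i mod P.
    return np.arange(num_pieces) % num_procs

def gaussian_workload(n_cells, corr_width, rng):
    # Stationary Gaussian workload: white noise smoothed by a boxcar of
    # width corr_width, whose autocorrelation decays linearly to zero at
    # lag corr_width and stays zero beyond it.
    noise = rng.standard_normal(n_cells + corr_width - 1)
    kernel = np.ones(corr_width) / np.sqrt(corr_width)
    return np.convolve(noise, kernel, mode="valid")  # length n_cells

def processor_loads(work, num_pieces, num_procs):
    # Cut the domain into equal-sized pieces, scatter them modularly,
    # and sum the work that lands on each processor.
    pieces = work.reshape(num_pieces, -1).sum(axis=1)
    owner = scatter_map(num_pieces, num_procs)
    return np.bincount(owner, weights=pieces, minlength=num_procs)

rng = np.random.default_rng(0)
n_cells, num_procs, corr_width, trials = 4096, 8, 64, 1000
for num_pieces in (8, 64, 512):  # successively finer decompositions
    var = np.mean([processor_loads(gaussian_workload(n_cells, corr_width, rng),
                                   num_pieces, num_procs).var()
                   for _ in range(trials)])
    print(f"{num_pieces:4d} pieces: mean load variance = {var:.1f}")

Running this sketch, the per-processor load variance shrinks as the piece count grows, consistent with the abstract's first result; replacing the short-range triangular correlation with one that increases with distance would illustrate the paper's counterexamples, where coarser granularity balances load better.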

Original language: English (US)
Pages (from-to): 1337-1345
Number of pages: 9
Journal: IEEE Transactions on Computers
Volume: 39
Issue number: 11
State: Published - Nov 1990
Externally published: Yes

Keywords

  • mapping problem
  • parallel processing
  • performance analysis
  • scatter decomposition

ASJC Scopus subject areas

  • Software
  • Theoretical Computer Science
  • Hardware and Architecture
  • Computational Theory and Mathematics
