Abstract
This paper provides a formal analysis of a powerful mapping technique known as scatter decomposition. Scatter decomposition divides an irregular computational domain into a large number of equal-sized pieces and distributes them modularly among processors. We use a probabilistic model of workload in one dimension to formally explain why and when scatter decomposition works. Our first result is that if correlation in workload is a convex function of distance, then scattering a more finely decomposed domain yields a lower average processor workload variance. Our second result shows that if the workload process is stationary Gaussian and the correlation function decreases linearly in distance until becoming zero and then remains zero, scattering a more finely decomposed domain yields a lower expected maximum processor workload. Finally, we show that if the correlation function decreases linearly across the entire domain, then among all mappings that assign an equal number of domain pieces to each processor, scatter decomposition minimizes the average processor workload variance. The dependence of these results on the assumption of decreasing correlation is illustrated with situations where a coarser granularity actually achieves better load balance.
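The paper's analysis is purely formal, but the setting is straightforward to simulate. The sketch below is not from the paper; all parameter choices (`N`, `P`, `corr_len`, the sample count) are hypothetical. It draws stationary Gaussian workloads with the triangular (linearly decreasing, then zero) correlation function assumed in the second result, then deals equal-sized pieces to processors modularly, piece j going to processor j mod P. Under the first result, refining the granularity should lower the average processor workload variance.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 1024         # domain cells (hypothetical)
P = 8            # processors (hypothetical)
corr_len = 64    # distance at which correlation reaches zero (hypothetical)

# Covariance of a stationary process with triangular correlation:
# rho(d) = max(0, 1 - d/corr_len) -- the "linearly decreasing, then zero"
# correlation function assumed in the paper's second result.
d = np.abs(np.subtract.outer(np.arange(N), np.arange(N)))
cov = np.maximum(0.0, 1.0 - d / corr_len)

# Sample correlated Gaussian workloads via an eigendecomposition of cov.
w, V = np.linalg.eigh(cov)
A = V * np.sqrt(np.clip(w, 0.0, None))          # cov == A @ A.T
samples = rng.standard_normal((200, N)) @ A.T   # 200 workload realizations

def processor_loads(workload, pieces):
    """Cut the domain into `pieces` equal chunks and deal them out
    modularly: piece j goes to processor j % P (scatter decomposition)."""
    per_piece = workload.reshape(pieces, -1).sum(axis=1)
    owner = np.arange(pieces) % P
    return np.array([per_piece[owner == p].sum() for p in range(P)])

for pieces in (P, 4 * P, 16 * P):               # finer and finer granularity
    loads = np.array([processor_loads(s, pieces) for s in samples])
    print(f"{pieces:4d} pieces: avg processor-load variance "
          f"= {loads.var(axis=1).mean():8.2f}")
```

With these assumed parameters the printed variance drops as the piece count grows; conversely, a correlation function that is not decreasing in distance is the kind of situation where, as the abstract notes, a coarser granularity can achieve better load balance.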
| Field | Value |
| --- | --- |
| Original language | English (US) |
| Pages (from-to) | 1337-1345 |
| Number of pages | 9 |
| Journal | IEEE Transactions on Computers |
| Volume | 39 |
| Issue number | 11 |
| DOIs | |
| State | Published - Nov 1990 |
| Externally published | Yes |
Keywords
- Mapping problem
- parallel processing
- performance analysis
- scatter decomposition
ASJC Scopus subject areas
- Software
- Theoretical Computer Science
- Hardware and Architecture
- Computational Theory and Mathematics