Abstract
This paper extends recent work on the problem of mapping pipelined or parallel computations onto linear-array, shared-memory, and host-satellite systems. We show how these problems can be solved even more efficiently when computation module execution times are bounded from below, intermodule communication times are bounded from above, and the processors satisfy certain homogeneity constraints. Our improvements have significantly lower time and space complexities than those of the more general algorithms: in one case, we replace an O(nm³) time algorithm for mapping m modules onto n processors with an O(nm log m) time algorithm, and reduce the space requirements from O(nm²) to O(m). We then reduce run-time complexity further with parallel mapping algorithms, based on these improvements, that run on the very architectures for which they create mappings.
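To make the mapping problem concrete, here is a minimal sketch of the classic contiguous-assignment formulation underlying this line of work: assign m pipeline modules, in order, to n linear-array processors so that the bottleneck (the maximum per-processor load) is minimized. This is an illustrative simplification, not the paper's algorithm; the function name, the binary-search-plus-greedy approach, and the integer weights are all assumptions for the sketch.

```python
def min_bottleneck(weights, n):
    """Smallest achievable maximum block sum when the module list `weights`
    is partitioned into at most `n` contiguous blocks (one per processor).

    Illustrative sketch only: binary search over the candidate bottleneck,
    with a greedy feasibility check -- not the algorithm from the paper.
    """
    def feasible(cap):
        # Greedily pack modules left to right; open a new block (processor)
        # whenever adding the next module would exceed the candidate cap.
        blocks, load = 1, 0
        for w in weights:
            if w > cap:
                return False  # a single module already exceeds the cap
            if load + w > cap:
                blocks += 1
                load = w
            else:
                load += w
        return blocks <= n

    # The answer lies between the heaviest module and the total load.
    lo, hi = max(weights), sum(weights)
    while lo < hi:
        mid = (lo + hi) // 2
        if feasible(mid):
            hi = mid
        else:
            lo = mid + 1
    return lo
```

For example, splitting module loads [2, 3, 4, 5, 6] across 2 processors, the best contiguous split is [2, 3, 4] | [5, 6], giving a bottleneck of 11.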
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 295-306 |
| Number of pages | 12 |
| Journal | IEEE Transactions on Computers |
| Volume | 40 |
| Issue number | 3 |
| DOIs | |
| State | Published - Mar 1991 |
| Externally published | Yes |
Keywords
- Mapping problem
- parallel algorithms
- parallel processing
- performance optimization
- pipelined computations
ASJC Scopus subject areas
- Software
- Theoretical Computer Science
- Hardware and Architecture
- Computational Theory and Mathematics