TY - GEN
T1 - Charisma: Orchestrating migratable parallel objects
T2 - 16th International Symposium on High Performance Distributed Computing 2007, HPDC'07 and Co-Located Workshops
AU - Huang, Chao
AU - Kalé, Laxmikant
PY - 2007/8/27
Y1 - 2007/8/27
N2 - The parallel programming paradigm based on migratable objects, as embodied in Charm++, improves programmer productivity by automating resource management. The programmer decomposes an application into a large number of parallel objects, while an intelligent run-time system assigns those objects to processors. It migrates objects among processors to effect dynamic load balancing and communication optimizations. In addition, having multiple sets of objects representing distinct computations leads to improved modularity and performance. However, for complex applications involving many sets of objects, Charm++'s programming model tends to obscure the global flow of control in a parallel program: one must look at the code of multiple objects to discern how the multiple sets of objects are orchestrated in a given application. In this paper, we present Charisma, an orchestration notation that allows expression of Charm++ functionality without fragmenting the expression of control flow. Charisma separates the expression of parallelism, including control flow and macro data-flow, from the sequential components of the program. The sequential components only consume and publish data. Charisma supports the expression of multiple patterns of communication among message-driven objects. A compiler generates Charm++ communication and synchronization code via static dependence analysis. As Charisma outputs standard Charm++ code, the functionality and performance benefits of the adaptive run-time system, such as automatic load balancing, are retained. In the paper, we show that Charisma programs scale up to 1024 processors without introducing undue overhead.
AB - The parallel programming paradigm based on migratable objects, as embodied in Charm++, improves programmer productivity by automating resource management. The programmer decomposes an application into a large number of parallel objects, while an intelligent run-time system assigns those objects to processors. It migrates objects among processors to effect dynamic load balancing and communication optimizations. In addition, having multiple sets of objects representing distinct computations leads to improved modularity and performance. However, for complex applications involving many sets of objects, Charm++'s programming model tends to obscure the global flow of control in a parallel program: one must look at the code of multiple objects to discern how the multiple sets of objects are orchestrated in a given application. In this paper, we present Charisma, an orchestration notation that allows expression of Charm++ functionality without fragmenting the expression of control flow. Charisma separates the expression of parallelism, including control flow and macro data-flow, from the sequential components of the program. The sequential components only consume and publish data. Charisma supports the expression of multiple patterns of communication among message-driven objects. A compiler generates Charm++ communication and synchronization code via static dependence analysis. As Charisma outputs standard Charm++ code, the functionality and performance benefits of the adaptive run-time system, such as automatic load balancing, are retained. In the paper, we show that Charisma programs scale up to 1024 processors without introducing undue overhead.
KW - Adaptivity
KW - Migratable objects
KW - Orchestration
KW - Parallel programming productivity
UR - http://www.scopus.com/inward/record.url?scp=34548082740&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=34548082740&partnerID=8YFLogxK
U2 - 10.1145/1272366.1272377
DO - 10.1145/1272366.1272377
M3 - Conference contribution
AN - SCOPUS:34548082740
SN - 1595936734
SN - 9781595936738
T3 - Proceedings of the 16th International Symposium on High Performance Distributed Computing 2007, HPDC'07
SP - 75
EP - 84
BT - Proceedings of the 16th International Symposium on High Performance Distributed Computing 2007, HPDC'07
Y2 - 25 June 2007 through 29 June 2007
ER -