TY - GEN
T1 - HPVM: Heterogeneous Parallel Virtual Machine
T2 - 23rd ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming, PPoPP 2018
AU - Kotsifakou, Maria
AU - Srivastava, Prakalp
AU - Sinclair, Matthew D.
AU - Komuravelli, Rakesh
AU - Adve, Vikram
AU - Adve, Sarita
N1 - Publisher Copyright:
© 2018 Copyright held by the owner/author(s).
PY - 2018/2/10
Y1 - 2018/2/10
N2 - We propose a parallel program representation for heterogeneous systems, designed to enable performance portability across a wide range of popular parallel hardware, including GPUs, vector instruction sets, multicore CPUs, and potentially FPGAs. Our representation, which we call HPVM, is a hierarchical dataflow graph with shared memory and vector instructions. HPVM supports three important capabilities for programming heterogeneous systems: a compiler intermediate representation (IR), a virtual instruction set (ISA), and a basis for runtime scheduling; previous systems focus on only one of these capabilities. As a compiler IR, HPVM aims to enable effective code generation and optimization for heterogeneous systems. As a virtual ISA, it can be used to ship executable programs, in order to achieve both functional portability and performance portability across such systems. At runtime, HPVM enables flexible scheduling policies, both through the graph structure and through the ability to compile individual nodes in a program to any of the target devices on a system. We have implemented a prototype HPVM system, defining the HPVM IR as an extension of the LLVM compiler IR, compiler optimizations that operate directly on HPVM graphs, and code generators that translate the virtual ISA to NVIDIA GPUs, Intel's AVX vector units, and multicore X86-64 processors. Experimental results show that HPVM optimizations achieve significant performance improvements, that HPVM translators achieve performance competitive with manually developed OpenCL code for both GPUs and vector hardware, and that runtime scheduling policies can make use of both program and runtime information to exploit the flexible compilation capabilities. Overall, we conclude that the HPVM representation is a promising basis for achieving performance portability and for implementing parallelizing compilers for heterogeneous parallel systems.
AB - We propose a parallel program representation for heterogeneous systems, designed to enable performance portability across a wide range of popular parallel hardware, including GPUs, vector instruction sets, multicore CPUs, and potentially FPGAs. Our representation, which we call HPVM, is a hierarchical dataflow graph with shared memory and vector instructions. HPVM supports three important capabilities for programming heterogeneous systems: a compiler intermediate representation (IR), a virtual instruction set (ISA), and a basis for runtime scheduling; previous systems focus on only one of these capabilities. As a compiler IR, HPVM aims to enable effective code generation and optimization for heterogeneous systems. As a virtual ISA, it can be used to ship executable programs, in order to achieve both functional portability and performance portability across such systems. At runtime, HPVM enables flexible scheduling policies, both through the graph structure and through the ability to compile individual nodes in a program to any of the target devices on a system. We have implemented a prototype HPVM system, defining the HPVM IR as an extension of the LLVM compiler IR, compiler optimizations that operate directly on HPVM graphs, and code generators that translate the virtual ISA to NVIDIA GPUs, Intel's AVX vector units, and multicore X86-64 processors. Experimental results show that HPVM optimizations achieve significant performance improvements, that HPVM translators achieve performance competitive with manually developed OpenCL code for both GPUs and vector hardware, and that runtime scheduling policies can make use of both program and runtime information to exploit the flexible compilation capabilities. Overall, we conclude that the HPVM representation is a promising basis for achieving performance portability and for implementing parallelizing compilers for heterogeneous parallel systems.
KW - Compiler
KW - GPU
KW - Heterogeneous Systems
KW - Parallel IR
KW - Vector SIMD
KW - Virtual ISA
UR - http://www.scopus.com/inward/record.url?scp=85044307152&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85044307152&partnerID=8YFLogxK
U2 - 10.1145/3178487.3178493
DO - 10.1145/3178487.3178493
M3 - Conference contribution
AN - SCOPUS:85044307152
T3 - Proceedings of the ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming, PPOPP
SP - 68
EP - 80
BT - PPoPP 2018 - Proceedings of the 23rd ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming
PB - Association for Computing Machinery
Y2 - 24 February 2018 through 28 February 2018
ER -