TY - GEN
T1 - At-Scale Sparse Deep Neural Network Inference with Efficient GPU Implementation
AU - Hidayetoglu, Mert
AU - Pearson, Carl
AU - Mailthody, Vikram Sharma
AU - Ebrahimi, Eiman
AU - Xiong, Jinjun
AU - Nagi, Rakesh
AU - Hwu, Wen Mei
N1 - The authors acknowledge Kishore Iyer, Jingning Tang, Hanhaotian Liu, and Volodymyr Kindratenko for their help. This research used resources of the Oak Ridge Leadership Computing Facility at the Oak Ridge National Laboratory, which is supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC05-00OR22725 and also utilized resources supported by the National Science Foundation’s Major Research Instrumentation program, grant #1725729, as well as the University of Illinois at Urbana-Champaign. This work is supported by IBM-ILLINOIS Center for Cognitive Computing Systems Research (C3SR) and partly supported by the Center for Applications Driving Architectures (ADA) and Center for Research on Intelligent Storage and Processing-in-memory (CRISP), JUMP Centers with prime award coming from SRC.
PY - 2020/9/22
Y1 - 2020/9/22
N2 - This paper presents GPU performance optimization and scaling results for inference models of the Sparse Deep Neural Network Challenge 2020. Demands for network quality have increased rapidly, pushing the size and thus the memory requirements of many neural networks beyond the capacity of available accelerators. Sparse deep neural networks (SpDNN) have shown promise for reining in the memory footprint of large neural networks. However, there is room for improvement in implementing SpDNN operations on GPUs. This work presents optimized sparse matrix multiplication kernels fused with the ReLU function. The optimized kernels reuse input feature maps from shared memory and sparse weights from registers. For multi-GPU parallelism, our SpDNN implementation duplicates weights and statically partitions the feature maps across GPUs. Results for the challenge benchmarks show that the proposed kernel design and multi-GPU parallelization achieve up to 180 TeraEdges per second inference throughput. These results are up to 4.3x faster for a single GPU and an order of magnitude faster at full scale than those of the champion of the 2019 Sparse Deep Neural Network Graph Challenge for the same generation of NVIDIA V100 GPUs. Using the same implementation (our code is open source at https://github.com/merthidayetoglu/SpDNN_Challenge2020), we also show that single-GPU throughput on the NVIDIA A100 is 2.37x faster than on the V100.
AB - This paper presents GPU performance optimization and scaling results for inference models of the Sparse Deep Neural Network Challenge 2020. Demands for network quality have increased rapidly, pushing the size and thus the memory requirements of many neural networks beyond the capacity of available accelerators. Sparse deep neural networks (SpDNN) have shown promise for reining in the memory footprint of large neural networks. However, there is room for improvement in implementing SpDNN operations on GPUs. This work presents optimized sparse matrix multiplication kernels fused with the ReLU function. The optimized kernels reuse input feature maps from shared memory and sparse weights from registers. For multi-GPU parallelism, our SpDNN implementation duplicates weights and statically partitions the feature maps across GPUs. Results for the challenge benchmarks show that the proposed kernel design and multi-GPU parallelization achieve up to 180 TeraEdges per second inference throughput. These results are up to 4.3x faster for a single GPU and an order of magnitude faster at full scale than those of the champion of the 2019 Sparse Deep Neural Network Graph Challenge for the same generation of NVIDIA V100 GPUs. Using the same implementation (our code is open source at https://github.com/merthidayetoglu/SpDNN_Challenge2020), we also show that single-GPU throughput on the NVIDIA A100 is 2.37x faster than on the V100.
UR - http://www.scopus.com/inward/record.url?scp=85099363806&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85099363806&partnerID=8YFLogxK
U2 - 10.1109/HPEC43674.2020.9286206
DO - 10.1109/HPEC43674.2020.9286206
M3 - Conference contribution
AN - SCOPUS:85099363806
T3 - 2020 IEEE High Performance Extreme Computing Conference, HPEC 2020
BT - 2020 IEEE High Performance Extreme Computing Conference, HPEC 2020
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2020 IEEE High Performance Extreme Computing Conference, HPEC 2020
Y2 - 21 September 2020 through 25 September 2020
ER -
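
Note: the abstract describes fused sparse matrix multiply + ReLU kernels, with feature maps staged in shared memory and weights held in registers, plus weight duplication and static feature-map partitioning across GPUs. As a rough illustration only (not the authors' open-source kernel, which adds the shared-memory and register tiling described above), a minimal CUDA sketch of one fused CSR SpMM + ReLU layer could look like the following; the Y = ReLU(W X + bias) convention, the CSR layout, and all names are assumptions for this sketch.

    #include <cuda_runtime.h>

    // One layer of a sparse DNN: Y[row, col] = ReLU( sum_k W[row, k] * X[k, col] + bias )
    // W is stored in CSR (rowptr, colidx, val); X and Y are dense, row-major,
    // with one column per input sample in the batch.
    __global__ void spmm_relu(const int *rowptr, const int *colidx, const float *val,
                              const float *x, float *y,
                              int n_out, int batch, float bias)
    {
        int row = blockIdx.y;                              // output neuron
        int col = blockIdx.x * blockDim.x + threadIdx.x;   // sample within the batch
        if (row >= n_out || col >= batch) return;

        float acc = bias;
        for (int j = rowptr[row]; j < rowptr[row + 1]; ++j)
            acc += val[j] * x[colidx[j] * batch + col];    // W[row, k] * X[k, col]

        y[row * batch + col] = fmaxf(acc, 0.0f);           // fused ReLU, no extra pass over Y
    }

    // Example launch: one thread per (output neuron, sample) pair.
    //   dim3 block(256);
    //   dim3 grid((batch + block.x - 1) / block.x, n_out);
    //   spmm_relu<<<grid, block>>>(rowptr, colidx, val, x, y, n_out, batch, bias);

Fusing the ReLU into the SpMM kernel, as the paper does, avoids writing the pre-activation result to global memory and reading it back for a separate activation pass.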