TY - GEN
T1 - cuSZp: An Ultra-fast GPU Error-bounded Lossy Compression Framework with Optimized End-to-End Performance
T2 - 2023 International Conference for High Performance Computing, Networking, Storage and Analysis, SC 2023
AU - Huang, Yafan
AU - Di, Sheng
AU - Yu, Xiaodong
AU - Li, Guanpeng
AU - Cappello, Franck
N1 - This research was supported by the Exascale Computing Project (ECP), Project Number: 17-SC-20-SC, a collaborative effort of two DOE organizations – the Office of Science and the National Nuclear Security Administration, responsible for the planning and preparation of a capable exascale ecosystem, including software, applications, hardware, advanced system engineering, and early testbed platforms, to support the nation’s exascale computing imperative. The material was supported by the U.S. Department of Energy, Office of Science, Advanced Scientific Computing Research (ASCR), under contract DE-AC02-06CH11357, and supported by the National Science Foundation under Grant OAC-2003709, OAC-2104023, OAC-2211538, and OAC-2211539/2247060. We acknowledge the computing resources provided on Bebop (operated by Laboratory Computing Resource Center at Argonne) and on Theta and JLSE (operated by Argonne Leadership Computing Facility). We acknowledge the support of ARAMCO.
PY - 2023/11/12
Y1 - 2023/11/12
N2 - Modern scientific applications and supercomputing systems are generating large amounts of data in various fields, leading to critical challenges in data storage footprints and communication times. To address this issue, error-bounded GPU lossy compression has been widely adopted, since it can reduce the volume of data within a customized threshold on data distortion. In this work, we propose an ultra-fast error-bounded GPU lossy compressor cuSZp. Specifically, cuSZp computes the linear recurrences with hierarchical parallelism to fuse the massive computation into one kernel, drastically improving the end-to-end throughput. In addition, cuSZp adopts a block-wise design along with a lightweight fixed-length encoding and bit-shuffle inside each block such that it achieves high compression ratios and data quality. Our experiments on NVIDIA A100 GPU with 6 representative scientific datasets demonstrate that cuSZp can achieve an ultra-fast end-to-end throughput (95.53x compared with cuSZ) along with a high compression ratio and high reconstructed data quality.
KW - CUDA
KW - GPU
KW - error-bounded lossy compression
KW - high-speed compressor
KW - parallel computing
KW - scientific simulation
UR - http://www.scopus.com/inward/record.url?scp=85178137474&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85178137474&partnerID=8YFLogxK
U2 - 10.1145/3581784.3607048
DO - 10.1145/3581784.3607048
M3 - Conference contribution
AN - SCOPUS:85178137474
T3 - Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis, SC 2023
BT - Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis, SC 2023
PB - Association for Computing Machinery
Y2 - 12 November 2023 through 17 November 2023
ER -