Abstract
This work optimizes tensor-times-dense matrix multiply (Ttm) for general sparse and semi-sparse tensors on CPU and NVIDIA GPU platforms. Ttm is a core computational kernel in tensor-method-based data analytics and data mining applications, such as the popular Tucker decomposition. We first design an in-place sequential SpTtm that avoids the explicit data reorganization between a tensor and a matrix required by the conventional approach. We further optimize SpTtm on NVIDIA GPU platforms. Five approaches are developed for GPU-SpTtm, including fine thread granularity, coalesced memory access, rank blocking, and the use of fast GPU shared memory. We also optimize semi-sparse tensor-times-dense matrix multiply (SspTtm) to take advantage of the dense sub-structures inside semi-sparse tensors. The optimized SpTtm and SspTtm are applied to Tucker decomposition to improve its overall performance. Our sequential SpTtm is 3–120× faster than the SpTtm from the Tensor Toolbox library. GPU-SpTtm obtains 6–19× speedup on an NVIDIA K40c and 23–67× speedup on an NVIDIA P100 over CPU-SpTtm. Our GPU-SpTtm is 3.9× faster than the state-of-the-art GPU implementation. Our SspTtm implementations outperform SpTtm, which treats the input semi-sparse tensor as a general sparse tensor, by up to 4.5×. Tucker decomposition achieves up to 3.2× speedup after applying the optimized Ttms. The code will be publicly released in the ParTI! library: https://github.com/hpcgarage/ParTI.
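To make the kernel concrete, the following is a minimal sequential sketch of a COO-based sparse Ttm along the last mode of a 3rd-order tensor, Y(i,j,:) += X(i,j,k) * U(k,:), whose output is semi-sparse (one dense length-R fiber per distinct (i,j) pair). It assumes the nonzeros are pre-sorted by (i,j); the names (`SpTensor3`, `coo_ttm_mode2`) are illustrative and are not the ParTI! API or the paper's implementation.

```c
/* Minimal sketch of a COO-based sparse Ttm along mode 2 of a 3rd-order tensor.
 * Assumes nonzeros are sorted by (i,j); names are illustrative only. */
#include <stdio.h>

typedef struct {      /* COO storage of a 3rd-order sparse tensor */
    long nnz;
    int *i, *j, *k;   /* mode indices per nonzero */
    double *val;
} SpTensor3;

/* Multiply X (sparse, COO, sorted by (i,j)) with dense U (K x R, row-major)
 * along mode 2. The result is semi-sparse: one dense length-R fiber per
 * distinct (i,j) pair. Returns the number of fibers; fiber f covers indices
 * (fi[f], fj[f]) with values in fibers[f*R .. f*R+R-1]. */
long coo_ttm_mode2(const SpTensor3 *X, const double *U, int R,
                   int *fi, int *fj, double *fibers)
{
    long nfib = -1;
    for (long z = 0; z < X->nnz; ++z) {
        /* Start a new output fiber whenever the (i,j) pair changes. */
        if (nfib < 0 || X->i[z] != fi[nfib] || X->j[z] != fj[nfib]) {
            ++nfib;
            fi[nfib] = X->i[z];
            fj[nfib] = X->j[z];
            for (int r = 0; r < R; ++r) fibers[nfib * R + r] = 0.0;
        }
        /* Accumulate X(i,j,k) * U(k,:) into the current dense fiber. */
        const double v = X->val[z];
        const double *u = U + (long)X->k[z] * R;
        for (int r = 0; r < R; ++r) fibers[nfib * R + r] += v * u[r];
    }
    return nfib + 1;
}

int main(void) {
    /* Tiny example: a 2 x 2 x 3 tensor with 4 nonzeros, R = 2 columns in U. */
    int i[] = {0, 0, 1, 1}, j[] = {0, 0, 1, 1}, k[] = {0, 2, 1, 2};
    double v[] = {1.0, 2.0, 3.0, 4.0};
    SpTensor3 X = {4, i, j, k, v};
    double U[3 * 2] = {1, 0, 0, 1, 1, 1};   /* K = 3, R = 2 */

    int fi[4], fj[4];
    double fibers[4 * 2];
    long nfib = coo_ttm_mode2(&X, U, 2, fi, fj, fibers);
    for (long f = 0; f < nfib; ++f)
        printf("Y(%d,%d,:) = [%g, %g]\n", fi[f], fj[f],
               fibers[f * 2], fibers[f * 2 + 1]);
    return 0;
}
```

The GPU variants described in the abstract parallelize this accumulation, for example by assigning threads at the granularity of (nonzero, rank) pairs so that consecutive threads access consecutive columns of U and Y (coalesced), with rank blocking and shared memory used when R is large; the sketch above only shows the sequential semantics.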
Original language | English (US) |
---|---|
Pages (from-to) | 99-109 |
Number of pages | 11 |
Journal | Journal of Parallel and Distributed Computing |
Volume | 129 |
DOIs | |
State | Published - Jul 2019 |
Externally published | Yes |
Keywords
- GPU
- Irregular algorithms
- Sparse tensors
- Tensor decomposition
ASJC Scopus subject areas
- Software
- Theoretical Computer Science
- Hardware and Architecture
- Computer Networks and Communications
- Artificial Intelligence