In-place transposition of rectangular matrices on accelerators

I. Jui Sung, Juan Gómez-Luna, José María González-Linares, Nicolás Guil, Wen-Mei W Hwu

Research output: Contribution to journal › Article

Abstract

Matrix transposition is an important algorithmic building block for many numeric algorithms, such as the FFT. It is also used to convert the storage layout of arrays. As more and more algebra libraries are offloaded to GPUs, high-performance in-place transposition becomes necessary. Intuitively, in-place transposition should be a good fit for GPU architectures, given their limited on-board memory capacity and high throughput. However, directly applying CPU in-place transposition algorithms lacks the parallelism and locality that GPUs require for good performance. In this paper we present the first known in-place matrix transposition approach for GPUs. Our implementation is based on a novel 3-stage transposition algorithm in which each stage is performed using an elementary tile-wise transposition. Additionally, when transposition is done as part of the memory transfer between GPU and host, our staged approach allows the transposition overhead to be hidden by overlapping it with the PCIe transfer. We show that the 3-stage algorithm allows larger tiles and achieves a 3X speedup over a traditional 4-stage algorithm, with both algorithms built on our high-performance elementary transpositions on the GPU. We also show that our proposed low-level optimizations improve the sustained throughput to more than 20 GB/s. Finally, we propose an asynchronous execution scheme that allows CPU threads to delegate in-place matrix transposition to the GPU, achieving a throughput of more than 3.4 GB/s (including data transfer costs) and improving on current multithreaded CPU implementations of in-place transposition.
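For context on why in-place transposition of a rectangular matrix is hard, note that the transpose is a nontrivial permutation of the flat storage. The sketch below shows the classic sequential cycle-following scheme that CPU algorithms are based on, not the paper's 3-stage GPU algorithm; the function name and flat-list representation are illustrative assumptions.

```python
def transpose_inplace(a, rows, cols):
    """Transpose a rows x cols matrix stored row-major in the flat list a.

    Afterwards, a holds the cols x rows transpose, also row-major.
    Uses the classic cycle-following permutation: element i (0 < i < n-1)
    moves to index (i * rows) % (n - 1); indices 0 and n-1 are fixed points.
    """
    n = rows * cols
    if n <= 2:
        return
    visited = [False] * n
    for start in range(1, n - 1):
        if visited[start]:
            continue
        # Rotate one permutation cycle, carrying the displaced value forward.
        i, val = start, a[start]
        while True:
            j = (i * rows) % (n - 1)   # destination index of element i
            val, a[j] = a[j], val      # drop val at j, pick up the old a[j]
            visited[j] = True
            if j == start:
                break
            i = j
```

The cycles visit memory in an essentially random order and offer little parallelism, which is why, as the abstract notes, direct porting to GPUs performs poorly; the paper instead decomposes the permutation into staged tile-wise transpositions that expose parallelism and locality.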

Original language: English (US)
Pages (from-to): 207-218
Number of pages: 12
Journal: ACM SIGPLAN Notices
Volume: 49
Issue number: 8
ISSN: 1523-2867
DOI: 10.1145/2555243.2555266
State: Published - Aug 2014


Keywords

  • GPU
  • In-Place
  • Transposition

ASJC Scopus subject areas

  • Computer Science(all)

Cite this

Sung, I. J., Gómez-Luna, J., González-Linares, J. M., Guil, N., & Hwu, W-M. W. (2014). In-place transposition of rectangular matrices on accelerators. ACM SIGPLAN Notices, 49(8), 207-218. https://doi.org/10.1145/2555243.2555266

