Sparse matrix computation

Wen-mei W. Hwu, David B. Kirk, Izzat El Hajj

Research output: Chapter in Book/Report/Conference proceeding › Chapter

Abstract

This chapter introduces the parallel sparse matrix-vector computation pattern. It starts with the basic concepts of sparse matrices, which are matrices in which most of the elements are zeros. It introduces the coordinate list format as a flexible representation that does not store zero matrix elements. It then introduces a kernel based on compressed sparse row data storage for sparse matrices. The ELL format with data padding is then introduced as a regularization technique for improved memory coalescing. Finally, the jagged diagonal storage format based on sorting is introduced to smooth out variation from one row to the next, allowing further reduction of control divergence and padding overhead in the regularization process. The chapter shows that the same sparse matrix kernel code can exhibit very different performance behavior on different datasets.

Original language: English (US)
Title of host publication: Programming Massively Parallel Processors
Subtitle of host publication: A Hands-on Approach, Fourth Edition
Publisher: Elsevier
Pages: 311-329
Number of pages: 19
ISBN (Electronic): 9780323912310
ISBN (Print): 9780323984638
State: Published - Jan 1 2022

Keywords

  • Sparse matrix
  • accessibility
  • compression
  • data-dependent execution behavior
  • flexibility
  • memory access efficiency
  • memory alignment
  • memory coalescing
  • padding
  • regularization
  • space efficiency
  • transposition

ASJC Scopus subject areas

  • General Computer Science

