Abstract

Algebraic multigrid (AMG) is often viewed as a scalable O(n) solver for sparse linear systems. Yet, AMG's parallel scalability is limited by increasingly large communication costs, both in the initial construction of the multigrid hierarchy and in the iterative solve phase. This work introduces a parallel implementation of AMG that reduces the cost of communication, yielding improved parallel scalability. It is common in Message Passing Interface (MPI) programs, particularly in the MPI-everywhere approach, to arrange inter-process communication so that messages are transported the same way regardless of the locations of the sending and receiving processes. Performance tests show notable differences in the cost of intra-node and internode communication, motivating a restructuring of communication. Here, the communication schedule takes advantage of the less costly intra-node communication, reducing both the number and the size of internode messages. This node-centric communication is extended to a range of components in both the setup and solve phases of AMG, improving both the weak and strong scaling of the entire method.
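To illustrate the idea behind node-aware scheduling, the following is a minimal sketch (not the paper's implementation) that simulates message counting. It assumes a hypothetical layout of four ranks per node and compares a standard schedule, where every process pair communicates directly, with a node-aware scheme in which all traffic between a pair of nodes is aggregated into a single internode message (intra-node gather, one internode send, intra-node scatter):

```python
PROCS_PER_NODE = 4  # assumed layout: rank r lives on node r // PROCS_PER_NODE

def node_of(rank):
    """Map an MPI rank to its (assumed) node index."""
    return rank // PROCS_PER_NODE

def standard_internode_count(messages):
    """Count internode messages when each (src, dst) pair sends directly."""
    return sum(1 for s, d in messages if node_of(s) != node_of(d))

def node_aware_internode_count(messages):
    """Count internode messages when all traffic between two nodes is
    aggregated: a cheap intra-node gather, a single internode send per
    communicating node pair, then an intra-node scatter."""
    pairs = {(node_of(s), node_of(d))
             for s, d in messages if node_of(s) != node_of(d)}
    return len(pairs)

# Example: every rank on node 0 sends to every rank on node 1.
msgs = [(s, d) for s in range(4) for d in range(4, 8)]
print(standard_internode_count(msgs))    # 16 direct internode messages
print(node_aware_internode_count(msgs))  # 1 aggregated internode message
```

The sketch counts only message numbers; the paper's multi-step scheme also reduces the total size of internode traffic by deduplicating shared data before it crosses the network.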

Original language: English (US)
Pages (from-to): 547-561
Number of pages: 15
Journal: International Journal of High Performance Computing Applications
Volume: 34
Issue number: 5
State: Published - Sep 1 2020

Keywords

  • Parallel
  • algebraic multigrid
  • multigrid
  • sparse matrix

ASJC Scopus subject areas

  • Software
  • Theoretical Computer Science
  • Hardware and Architecture

Title: Reducing communication in algebraic multigrid with multi-step node aware communication
