Modified conjugate gradient method for the solution of Ax = b

Sigal Gottlieb, Paul F. Fischer

Research output: Contribution to journal › Article › peer-review

Abstract

In this note, we examine a modified conjugate gradient procedure for solving Ax = b in which the approximation space is based upon the Krylov space K_k(√A, b) associated with √A and b. We show that, given initial vectors b and √A b (possibly computed at some expense), the best-fit solution in K_k(√A, b) can be computed using a finite-term recurrence requiring only one multiplication by A per iteration. The initial convergence rate appears, as expected, to be twice that of the standard conjugate gradient method, but stability problems degrade the convergence.
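For context, the sketch below shows the standard conjugate gradient iteration that the abstract uses as its point of comparison; it is not the modified recurrence of the paper, which builds its approximation space from √A and b. The function name, tolerance, iteration cap, and small test system are illustrative assumptions, not taken from the article.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Standard CG for symmetric positive definite A (baseline sketch only)."""
    x = np.zeros_like(b)
    r = b - A @ x          # initial residual
    p = r.copy()           # initial search direction
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p                     # one multiplication by A per iteration
        alpha = rs_old / (p @ Ap)      # step length along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:      # stop when the residual is small
            break
        p = r + (rs_new / rs_old) * p  # update search direction
        rs_old = rs_new
    return x

# Example: small SPD system (hypothetical test case)
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = conjugate_gradient(A, b)
```

Like the standard method above, the modified procedure described in the abstract requires only one multiplication by A per iteration, but it works in the Krylov space generated by √A rather than A.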

Original language: English (US)
Pages (from-to): 173-183
Number of pages: 11
Journal: Journal of Scientific Computing
Volume: 13
Issue number: 2
DOIs
State: Published - Jun 1 1998
Externally published: Yes

Keywords

  • Conjugate gradient method
  • Convergence rate
  • Krylov space
  • Modified conjugate gradient method
  • Stability

ASJC Scopus subject areas

  • Software
  • Theoretical Computer Science
  • Numerical Analysis
  • Engineering (all)
  • Computational Theory and Mathematics
  • Computational Mathematics
  • Applied Mathematics
