Abstract
In 1680 Newton proposed an algorithm for finding roots of polynomials. His method has since evolved, but the core concept remains intact. The convergence of Newton's method has been widely characterized as unstable or even chaotic. Here we briefly review this evolution and consider the question of stable convergence. Newton's method may be applied to any complex analytic function, such as a polynomial. Its derivation is based on a Taylor series expansion in the Laplace frequency s = σ + jω. The convergence of Newton's method depends on the Region of Convergence (RoC). Under certain conditions, nonlinear (NL) limit-cycles appear, reducing the rate of convergence to a root. Since Newton's method is inherently complex analytic (that is, linear and convergent), it is important to establish the source of this NL divergence, which we show is due to violations of the Nyquist sampling theorem, also known as aliasing. Here the conditions and method for uniform convergence are explored, and the source of the nonlinear limit-cycle is explained in terms of aliasing. We demonstrate numerically that reducing the step-size always results in more stable convergence, at the cost of a sub-optimal convergence rate. It follows that a dynamic step-size is ideal: slowly increase the step-size until convergence fails, then decrease it in small steps until the iteration converges again. Finding the optimal step-size in this way is a practical solution.
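To make the step-size discussion concrete, the following is a minimal Python sketch of a damped Newton iteration and a simple dynamic step-size variant. It is not the paper's implementation: the function names, the step-size constants (h0, grow, shrink), and the accept/reject rule based on |f(z)| are illustrative assumptions that merely mirror the strategy described in the abstract (a smaller step suppresses limit-cycle behavior at the price of slower convergence; the step is grown until a step fails and shrunk when it does).

```python
import numpy as np  # not strictly required; shown for the usual numerical context


def damped_newton(f, df, z0, h=1.0, tol=1e-12, max_iter=200):
    """Newton iteration z <- z - h * f(z)/f'(z) with a fixed step-size h <= 1.

    h = 1 recovers the classical method; a smaller h slows convergence but
    tends to suppress the limit-cycle behavior discussed in the abstract.
    """
    z = complex(z0)
    for k in range(max_iter):
        dz = f(z) / df(z)          # full Newton step
        z_new = z - h * dz         # damped update
        if abs(z_new - z) < tol:
            return z_new, k + 1
        z = z_new
    return z, max_iter


def adaptive_newton(f, df, z0, h0=0.1, grow=1.5, shrink=0.5,
                    tol=1e-12, max_iter=200):
    """Dynamic step-size: grow h while steps reduce |f(z)|, shrink h when a
    trial step would increase it (a crude stand-in for the paper's strategy)."""
    z, h = complex(z0), h0
    for k in range(max_iter):
        if abs(f(z)) < tol:        # converged to a root
            return z, k
        dz = f(z) / df(z)
        trial = z - h * dz
        if abs(f(trial)) < abs(f(z)):
            z, h = trial, min(1.0, grow * h)   # accept step, try a larger h
        else:
            h *= shrink                        # reject step, back off
    return z, max_iter


if __name__ == "__main__":
    # Illustrative example (not from the paper): roots of p(z) = z**3 - 1.
    p = lambda z: z**3 - 1
    dp = lambda z: 3 * z**2
    print(damped_newton(p, dp, z0=0.6 + 0.9j, h=0.5))
    print(adaptive_newton(p, dp, z0=0.6 + 0.9j))
```

The accept/reject test on |f(z)| is only one possible way to detect a failing step; the paper's criterion may differ.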
| Original language | English (US) |
|---|---|
| Article number | 10466642 |
| Pages (from-to) | 1683-1690 |
| Number of pages | 8 |
| Journal | IEEE Transactions on Signal Processing |
| Volume | 72 |
| Issue number | 99 |
| State | Published - Mar 11 2024 |
Keywords
- Aliasing
- Nyquist-sampling
- analytic-roots
- convergence
- limit-cycles
- regions of convergence (RoC)
ASJC Scopus subject areas
- Signal Processing
- Electrical and Electronic Engineering