
Next steps

The fixed-point iteration has been used to prove results about convergence; see Quarteroni et al. (2007) for more details.
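Fixed-point iteration itself is compact enough to sketch. The following is an illustrative Python version (the function name `fixed_point` and the tolerance defaults are choices made here, not taken from any reference above). Solving x = cos(x) makes a good demonstration, since the map is a contraction near its root and the convergence theory applies.

```python
from math import cos

def fixed_point(g, x0, tol=1e-10, maxiter=100):
    """Iterate x_{k+1} = g(x_k) until successive iterates agree to tol."""
    x = x0
    for _ in range(maxiter):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("fixed-point iteration did not converge")

# Solve x = cos(x); since |g'(x)| = |sin(x)| < 1 near the root,
# the iteration converges linearly to the unique fixed point.
root = fixed_point(cos, 1.0)
```

Because the contraction constant here is about 0.67, each iteration gains a fixed fraction of a digit, so dozens of iterations are needed for ten digits; this linear rate is exactly what the convergence theorems predict.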

Solving nonlinear systems is a complex task. Classic texts on the subject include Ortega & Rheinboldt (2014) and Kelley (1995), among many others.

Turning to nonlinear least squares, the widely used Levenberg–Marquardt algorithm has a connection to the authors’ home institution. Donald Marquardt completed an MS in mathematics and statistics at the University of Delaware in 1956, with a thesis entitled “Application of Digital Computers to Statistics.” He developed the method while working at DuPont Corporation’s laboratories, where he was employed from 1953 to 1991. He needed a better fitting method for experimental data generated at DuPont, and his algorithm delivered just that. The method was published in 1963 in the Journal of the Society for Industrial and Applied Mathematics Marquardt (1963). In a note at the end, he thanks a referee for pointing out that Levenberg had published related ideas elsewhere Levenberg (1944), and cites that work. He also made a FORTRAN implementation of the method available. There’s a lesson there for budding mathematical software designers: if you want people to use your algorithm or method, give away software to do it! An interesting discussion of the evolution of computing during his career at DuPont can be found in an interview Hahn (1995).
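The heart of the method is compact enough to sketch. The code below is an illustrative Python implementation, not Marquardt's FORTRAN: it damps the Gauss–Newton normal equations along the diagonal of JᵀJ (Marquardt's scaling), and the factors 0.2 and 5 used to update the damping parameter are conventional choices made here, not values from the 1963 paper.

```python
import numpy as np

def levenberg_marquardt(residual, jacobian, p0, lam=1e-3, maxiter=50, tol=1e-12):
    """Sketch of Levenberg-Marquardt: a Gauss-Newton step damped by lam,
    with lam decreased after an accepted step and increased after a
    rejected one."""
    p = np.asarray(p0, dtype=float)
    r = residual(p)
    cost = r @ r
    for _ in range(maxiter):
        J = jacobian(p)
        A = J.T @ J
        g = J.T @ r
        # Marquardt's scaling: damp along the diagonal of J^T J.
        step = np.linalg.solve(A + lam * np.diag(np.diag(A)), -g)
        p_new = p + step
        r_new = residual(p_new)
        cost_new = r_new @ r_new
        if cost_new < cost:      # accept: behave more like Gauss-Newton
            p, r, cost = p_new, r_new, cost_new
            lam *= 0.2
            if np.linalg.norm(step) < tol:
                break
        else:                    # reject: behave more like gradient descent
            lam *= 5.0
    return p

# Fit the model y = a*exp(b*x) to noiseless synthetic data with a=2, b=-1.
x = np.linspace(0, 2, 20)
y = 2.0 * np.exp(-1.0 * x)
res = lambda p: p[0] * np.exp(p[1] * x) - y
jac = lambda p: np.column_stack([np.exp(p[1] * x),
                                 p[0] * x * np.exp(p[1] * x)])
p_fit = levenberg_marquardt(res, jac, [1.0, 0.0])
```

The adaptive damping is the key design idea: when a step reduces the sum of squares, the iteration trusts the local linearization more; when it fails, the iteration falls back toward a short steepest-descent step.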

Optimization is closely linked to solving nonlinear equations as well. More information can be found, among many other places, in Strang (2007), Quarteroni et al. (2007), and Conn et al. (2009).

Finally, we close with a remark about polynomials. Polynomials are common, and finding their roots is a frequently occurring task. As we saw in Chapter 1, however, the condition number of polynomial rootfinding can be large, and the possibility of complex roots is another challenge. It turns out that while general-purpose rootfinding can work for polynomials, it’s faster and more robust to use something tailored to the task. One of the best-known approaches is to convert the problem into finding the eigenvalues of a related matrix. A recent paper on the subject won an outstanding paper award from the Society for Industrial and Applied Mathematics Aurentz et al. (2015); it combined the companion matrix with a specialized QR factorization to develop a fast and backward stable method. That paper also contains a very good comparison among modern methods for polynomial rootfinding.
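The companion-matrix idea is easy to demonstrate. The sketch below builds the companion matrix of a monic polynomial and hands it to a general eigenvalue solver; this is essentially what `numpy.roots` does, without the specialized structured QR factorization of Aurentz et al. (2015).

```python
import numpy as np

def poly_roots(c):
    """Roots of p(x) = c[0]*x^n + ... + c[n] via the eigenvalues of the
    companion matrix of the monic version of p."""
    c = np.asarray(c, dtype=float)
    c = c / c[0]                    # make the polynomial monic
    n = len(c) - 1
    C = np.zeros((n, n))
    C[1:, :-1] = np.eye(n - 1)      # subdiagonal of ones
    C[:, -1] = -c[:0:-1]            # last column: negated coefficients
    return np.linalg.eigvals(C)

# p(x) = x^3 - 6x^2 + 11x - 6 = (x-1)(x-2)(x-3), so the roots are 1, 2, 3.
r = poly_roots([1, -6, 11, -6])
```

Since the eigenvalue solver works over the complex numbers, complex roots come out naturally, addressing one of the two difficulties noted above; the conditioning issue, of course, remains inherent to the problem.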

References
  1. Quarteroni, A., Sacco, R., & Saleri, F. (2007). Numerical Mathematics. Springer.
  2. Ortega, J. M., & Rheinboldt, W. C. (2014). Iterative Solution of Nonlinear Equations in Several Variables. Elsevier.
  3. Kelley, C. T. (1995). Iterative Methods for Linear and Nonlinear Equations. SIAM.
  4. Marquardt, D. W. (1963). An Algorithm for Least-Squares Estimation of Nonlinear Parameters. Journal of the Society for Industrial and Applied Mathematics, 11(2), 431–441. https://doi.org/10.1137/0111030
  5. Levenberg, K. (1944). A Method for the Solution of Certain Non-Linear Problems in Least Squares. Quarterly of Applied Mathematics, 2(2), 164–168. https://doi.org/10.1090/qam/10666