Consider a set of $m$ data points, $(x_1, y_1), (x_2, y_2), \dots, (x_m, y_m)$, and a curve (model function) $\hat{y} = f(x, \boldsymbol\beta)$ that in addition to the variable $x$ also depends on $n$ parameters, $\boldsymbol\beta = (\beta_1, \beta_2, \dots, \beta_n)$, with $m \ge n$. It is desired to find the vector $\boldsymbol\beta$ of parameters such that the curve fits best the given data in the least squares sense, that is, the sum of squares
$$S = \sum_{i=1}^m r_i^2$$
is minimized, where the residuals $r_i$ are given by $r_i = y_i - f(x_i, \boldsymbol\beta)$ for $i = 1, 2, \dots, m$.
The minimum value of $S$ occurs when the gradient is zero. Since the model contains $n$ parameters there are $n$ gradient equations:
$$\frac{\partial S}{\partial \beta_j} = 2 \sum_i r_i \frac{\partial r_i}{\partial \beta_j} = 0 \quad (j = 1, \dots, n).$$
In a nonlinear system, the derivatives $\partial r_i / \partial \beta_j$ are functions of both the independent variable and the parameters, so in general these gradient equations do not have a closed solution. Instead, initial values must be chosen for the parameters. Then, the parameters are refined iteratively, that is, the values are obtained by successive approximation,
$$\beta_j \approx \beta_j^{k+1} = \beta_j^k + \Delta\beta_j.$$
Here, $k$ is an iteration number and the vector of increments, $\Delta\boldsymbol\beta$, is known as the shift vector. At each iteration the model is linearized by approximation to a first-order Taylor polynomial expansion about $\boldsymbol\beta^k$:
$$f(x_i, \boldsymbol\beta) \approx f(x_i, \boldsymbol\beta^k) + \sum_j \frac{\partial f(x_i, \boldsymbol\beta^k)}{\partial \beta_j}\,(\beta_j - \beta_j^k) = f(x_i, \boldsymbol\beta^k) + \sum_j J_{ij}\, \Delta\beta_j.$$
The Jacobian, $\mathbf{J}$, is a function of constants, the independent variable and the parameters, so it changes from one iteration to the next. In terms of the linearized model, $\partial r_i / \partial \beta_j = -J_{ij}$, and the residuals are given by
$$r_i = \Delta y_i - \sum_{s=1}^n J_{is}\, \Delta\beta_s, \qquad \Delta y_i = y_i - f(x_i, \boldsymbol\beta^k).$$
Substituting these expressions into the gradient equations, they become
$$-2 \sum_{i=1}^m J_{ij} \Big( \Delta y_i - \sum_{s=1}^n J_{is}\, \Delta\beta_s \Big) = 0,$$
which, on rearrangement, become $n$ simultaneous linear equations, the normal equations:
$$\sum_{i=1}^m \sum_{s=1}^n J_{ij} J_{is}\, \Delta\beta_s = \sum_{i=1}^m J_{ij}\, \Delta y_i \quad (j = 1, \dots, n).$$
The normal equations are written in matrix notation as
$$\left(\mathbf{J}^\mathsf{T} \mathbf{J}\right) \Delta\boldsymbol\beta = \mathbf{J}^\mathsf{T}\, \Delta\mathbf{y}.$$
These equations form the basis for the Gauss–Newton algorithm for a non-linear least squares problem.
Note the sign convention in the definition of the Jacobian matrix in terms of the derivatives. Formulas linear in $\mathbf{J}$ may appear with a factor of $-1$ in other articles or the literature.
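As a concrete illustration of one linearized step, the following sketch assumes a hypothetical exponential-decay model $f(x, \boldsymbol\beta) = \beta_1 e^{-\beta_2 x}$ (not taken from the text above), builds the Jacobian from analytical derivatives and solves the normal equations once for the shift vector.

```python
import numpy as np

# Hypothetical model: f(x, beta) = beta1 * exp(-beta2 * x)
def model(x, beta):
    return beta[0] * np.exp(-beta[1] * x)

def jacobian(x, beta):
    # J[i, j] = d f(x_i, beta) / d beta_j, from analytical derivatives
    e = np.exp(-beta[1] * x)
    return np.column_stack([e, -beta[0] * x * e])

# Synthetic data for illustration only
x = np.linspace(0.0, 4.0, 25)
rng = np.random.default_rng(0)
y = 2.0 * np.exp(-1.3 * x) + 0.01 * rng.standard_normal(x.size)

beta = np.array([1.0, 1.0])            # initial guess
J = jacobian(x, beta)
dy = y - model(x, beta)                # Delta y: residuals of the current model
# Normal equations: (J^T J) dbeta = J^T dy
dbeta = np.linalg.solve(J.T @ J, J.T @ dy)
beta_next = beta + dbeta               # one Gauss-Newton shift
```

Iterating this step is the Gauss–Newton method described under Algorithms below.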
Extension by weights
When the observations are not equally reliable, a weighted sum of squares may be minimized,
$$S = \sum_{i=1}^m W_{ii}\, r_i^2.$$
Each element of the diagonal weight matrix $\mathbf{W}$ should, ideally, be equal to the reciprocal of the error variance of the measurement. The normal equations are then, more generally,
$$\left(\mathbf{J}^\mathsf{T} \mathbf{W} \mathbf{J}\right) \Delta\boldsymbol\beta = \mathbf{J}^\mathsf{T} \mathbf{W}\, \Delta\mathbf{y}.$$
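In code, weighting only changes the matrices that enter the normal equations. A minimal sketch, assuming the diagonal of $\mathbf{W}$ is built from known per-point standard deviations (the names are illustrative):

```python
import numpy as np

def weighted_shift(J, dy, sigma):
    """Shift vector from the weighted normal equations (J^T W J) dbeta = J^T W dy.

    J and dy are the Jacobian and residual vector at the current estimate;
    sigma holds the (assumed known) standard deviation of each observation.
    """
    W = np.diag(1.0 / sigma**2)        # W_ii = 1 / variance of measurement i
    return np.linalg.solve(J.T @ W @ J, J.T @ W @ dy)
```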
Geometrical interpretation
In linear least squares the objective function, S, is a quadratic function of the parameters. In non-linear least squares the objective function is quadratic with respect to the parameters only in a region close to its minimum value, where the truncated Taylor series is a good approximation to the model.
Computation
Initial parameter estimates
Some problems of ill-conditioning and divergence can be corrected by finding initial parameter estimates that are near to the optimal values. A good way to do this is by computer simulation: both the observed and calculated data are displayed on a screen, and the parameters of the model are adjusted by hand until the agreement between observed and calculated data is reasonably good. Although this is a subjective judgment, it is sufficient to find a good starting point for the non-linear refinement. Initial parameter estimates can also be created using transformations or linearizations. Better still, evolutionary algorithms such as the Stochastic Funnel Algorithm can lead to the convex basin of attraction that surrounds the optimal parameter estimates. Hybrid algorithms that use randomization and elitism, followed by Newton methods, have been shown to be useful and computationally efficient.
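As one illustration of a linearization used only to generate starting values, the sketch below assumes data that roughly follow the hypothetical exponential-decay model $y \approx \beta_1 e^{-\beta_2 x}$ and fits a straight line to $\log y$:

```python
import numpy as np

# Illustrative data assumed to follow y ≈ beta1 * exp(-beta2 * x)
rng = np.random.default_rng(1)
x = np.linspace(0.5, 4.0, 20)
y = 2.0 * np.exp(-1.3 * x) * (1 + 0.05 * rng.standard_normal(x.size))

# log y ≈ log(beta1) - beta2 * x is linear in its coefficients, so an
# ordinary linear fit supplies starting values for the non-linear refinement.
slope, intercept = np.polyfit(x, np.log(y), 1)
beta0 = np.array([np.exp(intercept), -slope])   # initial estimates of beta1, beta2
```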
Solution
Any of the methods described below can be applied to find a solution.
Convergence criteria
The common sense criterion for convergence is that the sum of squares does not decrease from one iteration to the next. However, this criterion is often difficult to implement in practice, for various reasons. A useful convergence criterion is
$$\left| \frac{S^k - S^{k+1}}{S^k} \right| < 0.0001.$$
The value 0.0001 is somewhat arbitrary and may need to be changed; in particular it may need to be increased when experimental errors are large. An alternative criterion is that the relative change in each parameter be small,
$$\left| \frac{\Delta\beta_j}{\beta_j} \right| < 0.001, \quad j = 1, \dots, n.$$
Again, the numerical value is somewhat arbitrary; 0.001 is equivalent to specifying that each parameter should be refined to 0.1% precision. This is reasonable as long as it is less than the largest relative standard deviation of the parameters.
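A minimal sketch of both tests, assuming the quantities from the current iteration are available (the names are illustrative):

```python
import numpy as np

def converged(S_prev, S_new, beta, dbeta, tol_S=1e-4, tol_beta=1e-3):
    # Relative decrease in the sum of squares
    small_S_change = abs(S_prev - S_new) / S_prev < tol_S
    # Relative change in every parameter
    small_shift = np.all(np.abs(dbeta / beta) < tol_beta)
    return small_S_change or small_shift
```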
Calculation of the Jacobian by numerical approximation
There are models for which it is either very difficult or even impossible to derive analytical expressions for the elements of the Jacobian. Then, the numerical approximation
$$\frac{\partial f(x_i, \boldsymbol\beta)}{\partial \beta_j} \approx \frac{f(x_i, \boldsymbol\beta + \delta\beta_j) - f(x_i, \boldsymbol\beta)}{\delta\beta_j}$$
is obtained by calculating $f(x_i, \boldsymbol\beta)$ with $\beta_j$ and with $\beta_j + \delta\beta_j$. The increment $\delta\beta_j$ should be chosen so that the numerical derivative is not subject to approximation error by being too large, or to round-off error by being too small.
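A forward-difference sketch; the relative step size is a common rule-of-thumb choice, not a prescription from the text:

```python
import numpy as np

def numerical_jacobian(model, x, beta, rel_step=1e-7):
    """Forward-difference approximation of J[i, j] = d f(x_i, beta) / d beta_j."""
    f0 = model(x, beta)
    J = np.empty((x.size, beta.size))
    for j in range(beta.size):
        step = rel_step * max(abs(beta[j]), 1.0)   # avoid a zero increment
        beta_step = beta.copy()
        beta_step[j] += step
        J[:, j] = (model(x, beta_step) - f0) / step
    return J
```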
Parameter errors, confidence limits, residuals etc.
Some information is given in the corresponding section on the linear least squares page.
Multiple minima
Multiple minima can occur in a variety of circumstances, some of which are:
- A parameter is raised to a power of two or more. For example, when fitting data to a Lorentzian curve $f(x_i, \boldsymbol\beta) = \frac{\alpha}{1 + \left(\frac{\gamma - x_i}{\beta}\right)^2}$, where $\alpha$ is the height, $\gamma$ is the position and $\beta$ is the half-width at half height, there are two solutions for the half-width, $\hat\beta$ and $-\hat\beta$, which give the same optimal value for the objective function.
- Two parameters can be interchanged without changing the value of the model. A simple example is when the model contains the product of two parameters, since $\alpha\beta$ will give the same value as $\beta\alpha$.
- A parameter is in a trigonometric function, such as $\sin \beta x$, which has identical values at $\hat\beta + 2n\pi$. See Levenberg–Marquardt algorithm for an example.
Not all multiple minima have equal values of the objective function. False minima, also known as local minima, occur when the objective function value is greater than its value at the so-called global minimum. To be certain that the minimum found is the global minimum, the refinement should be started with widely differing initial values of the parameters. When the same minimum is found regardless of starting point, it is likely to be the global minimum.
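A minimal multi-start sketch along these lines, where `refine` stands for any of the refinement methods described below (the names are illustrative):

```python
def multistart(refine, sum_of_squares, starting_points):
    """Refine from several widely differing starting points and keep the best fit.

    `refine` is assumed to return the converged parameter vector; if the same
    minimum is reached from every start, it is likely to be the global minimum.
    """
    fits = [refine(beta0) for beta0 in starting_points]
    return min(fits, key=sum_of_squares)
```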
When multiple minima exist there is an important consequence: the objective function will have a maximum value somewhere between two minima. The normal equations matrix is not positive definite at a maximum in the objective function, as the gradient is zero and no unique direction of descent exists. Refinement from a point (a set of parameter values) close to a maximum will be ill-conditioned and should be avoided as a starting point. For example, when fitting a Lorentzian the normal equations matrix is not positive definite when the half-width of the band is zero.
Transformation to a linear model
A non-linear model can sometimes be transformed into a linear one. For example, when the model is a simple exponential function,
$$f(x_i, \boldsymbol\beta) = \alpha e^{\beta x_i},$$
it can be transformed into a linear model by taking logarithms,
$$\log f(x_i, \boldsymbol\beta) = \log \alpha + \beta x_i.$$
Fitting the transformed model by linear least squares is not, however, equivalent to minimizing the original sum of squares, because the transformation also distorts the measurement errors.
Another example is furnished by Michaelis–Menten kinetics, used to determine two parameters $V_\max$ and $K_m$:
$$v = \frac{V_\max [S]}{K_m + [S]}.$$
The Lineweaver–Burk plot of $\frac{1}{v}$ against $\frac{1}{[S]}$ is linear in the parameters $\frac{1}{V_\max}$ and $\frac{K_m}{V_\max}$.
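A sketch of the Lineweaver–Burk transformation used to obtain starting values (the substrate concentrations and rates are made-up illustrative numbers):

```python
import numpy as np

# Illustrative Michaelis-Menten data: substrate concentration S and rate v
S = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])
v = np.array([0.38, 0.62, 0.91, 1.18, 1.39, 1.52])

# Lineweaver-Burk: 1/v = (Km/Vmax) * (1/S) + 1/Vmax is linear in 1/S
slope, intercept = np.polyfit(1.0 / S, 1.0 / v, 1)
Vmax0 = 1.0 / intercept          # starting value for Vmax
Km0 = slope * Vmax0              # starting value for Km
# These estimates can now seed a non-linear refinement of the original model.
```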
Algorithms
Gauss–Newton method
The normal equations
$$\left(\mathbf{J}^\mathsf{T} \mathbf{W} \mathbf{J}\right) \Delta\boldsymbol\beta = \mathbf{J}^\mathsf{T} \mathbf{W}\, \Delta\mathbf{y}$$
may be solved for $\Delta\boldsymbol\beta$ by Cholesky decomposition, as described in linear least squares. The parameters are updated iteratively,
$$\boldsymbol\beta^{k+1} = \boldsymbol\beta^k + \Delta\boldsymbol\beta,$$
where $k$ is an iteration number.
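A compact Gauss–Newton loop along these lines, assuming `model` and `jacobian` callables and unit weights; this is a sketch, not a robust implementation, and practical codes add shift-cutting or a Marquardt parameter as described below:

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def gauss_newton(model, jacobian, x, y, beta0, max_iter=50, tol=1e-4):
    beta = np.asarray(beta0, dtype=float)
    S_prev = np.sum((y - model(x, beta)) ** 2)
    for _ in range(max_iter):
        dy = y - model(x, beta)                  # residuals Delta y
        J = jacobian(x, beta)
        # Solve (J^T J) dbeta = J^T dy by Cholesky factorization
        dbeta = cho_solve(cho_factor(J.T @ J), J.T @ dy)
        beta = beta + dbeta
        S = np.sum((y - model(x, beta)) ** 2)
        if abs(S_prev - S) / S_prev < tol:       # relative-decrease criterion
            break
        S_prev = S
    return beta
```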
Shift-cutting
If divergence occurs, a simple expedient is to reduce the length of the shift vector, $\Delta\boldsymbol\beta$, by a fraction, $f$:
$$\boldsymbol\beta^{k+1} = \boldsymbol\beta^k + f\, \Delta\boldsymbol\beta.$$
For example, $f$ could be halved successively until a decrease in the value of the objective function is obtained.
When using shift-cutting, the direction of the shift vector remains unchanged. This limits the applicability of the method to situations where the direction of the shift vector is not very different from what it would be if the objective function were approximately quadratic in the parameters near $\boldsymbol\beta^k$.
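A sketch of the step-halving just described, assuming a `sum_of_squares` function that evaluates $S$ at a trial point (the names are illustrative):

```python
def cut_shift(sum_of_squares, beta, dbeta, max_halvings=10):
    """Halve the shift until the objective function decreases (shift-cutting)."""
    S0 = sum_of_squares(beta)
    f = 1.0
    for _ in range(max_halvings):
        trial = beta + f * dbeta
        if sum_of_squares(trial) < S0:   # accept the first reducing step
            return trial
        f *= 0.5                         # direction unchanged, only the length shrinks
    return beta                          # no reduction found; keep current parameters
```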
Marquardt parameter
If divergence occurs and the direction of the shift vector is so far from its "ideal" direction that shift-cutting is not very effective, that is, the fraction $f$ required to avoid divergence is very small, the direction must be changed. This can be achieved by using the Marquardt parameter. In this method the normal equations are modified,
$$\left(\mathbf{J}^\mathsf{T} \mathbf{W} \mathbf{J} + \lambda \mathbf{I}\right) \Delta\boldsymbol\beta = \mathbf{J}^\mathsf{T} \mathbf{W}\, \Delta\mathbf{y},$$
where $\lambda$ is the Marquardt parameter and $\mathbf{I}$ is an identity matrix.
Various strategies have been proposed for the determination of the Marquardt parameter. As with shift-cutting, it is wasteful to optimize this parameter too stringently. Rather, once a value has been found that brings about a reduction in the value of the objective function, that value of the parameter is carried to the next iteration, reduced if possible, or increased if need be. When reducing the value of the Marquardt parameter, there is a cut-off value below which it is safe to set it to zero, that is, to continue with the unmodified Gauss–Newton method. The cut-off value may be set equal to the smallest singular value of the Jacobian. A bound for this value is given by $1 / \operatorname{tr}\!\left[\left(\mathbf{J}^\mathsf{T} \mathbf{W} \mathbf{J}\right)^{-1}\right]$, where $\operatorname{tr}$ is the trace function.
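A sketch of this carry-forward strategy for $\lambda$; the factors of 10, the unit weights and the iteration cap are illustrative choices, not prescriptions from the text:

```python
import numpy as np

def marquardt_step(J, dy, beta, sum_of_squares, lam, max_tries=20):
    """One damped step: increase lambda until the objective function decreases."""
    A, g = J.T @ J, J.T @ dy             # unit weights assumed for brevity
    S0 = sum_of_squares(beta)
    for _ in range(max_tries):
        dbeta = np.linalg.solve(A + lam * np.eye(A.shape[0]), g)
        trial = beta + dbeta
        if sum_of_squares(trial) < S0:
            return trial, lam / 10.0     # carry a reduced lambda to the next iteration
        lam *= 10.0                      # step rejected: damp more strongly
    return beta, lam                     # give up; keep current parameters
```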
QR decomposition
The minimum in the sum of squares can be found by a method that does not involve forming the normal equations. The residuals with the linearized model can be written as
$$\mathbf{r} = \Delta\mathbf{y} - \mathbf{J}\, \Delta\boldsymbol\beta.$$
The Jacobian is subjected to an orthogonal decomposition; the QR decomposition will serve to illustrate the process:
$$\mathbf{J} = \mathbf{Q} \mathbf{R},$$
where $\mathbf{Q}$ is an orthogonal $m \times m$ matrix and $\mathbf{R}$ is an $m \times n$ matrix which is partitioned into an $n \times n$ upper triangular block, $\mathbf{R}_n$, and an $(m - n) \times n$ zero block.
The residual vector is left-multiplied by $\mathbf{Q}^\mathsf{T}$:
$$\mathbf{Q}^\mathsf{T} \mathbf{r} = \mathbf{Q}^\mathsf{T}\, \Delta\mathbf{y} - \mathbf{R}\, \Delta\boldsymbol\beta = \begin{bmatrix} \left(\mathbf{Q}^\mathsf{T}\, \Delta\mathbf{y}\right)_n - \mathbf{R}_n\, \Delta\boldsymbol\beta \\ \left(\mathbf{Q}^\mathsf{T}\, \Delta\mathbf{y}\right)_{m-n} \end{bmatrix}.$$
This has no effect on the sum of squares, since $S = \mathbf{r}^\mathsf{T} \mathbf{Q} \mathbf{Q}^\mathsf{T} \mathbf{r} = \mathbf{r}^\mathsf{T} \mathbf{r}$ because $\mathbf{Q}$ is orthogonal. The minimum value of $S$ is attained when the upper block is zero. Therefore, the shift vector is found by solving
$$\mathbf{R}_n\, \Delta\boldsymbol\beta = \left(\mathbf{Q}^\mathsf{T}\, \Delta\mathbf{y}\right)_n.$$
These equations are easily solved as $\mathbf{R}_n$ is upper triangular.
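A sketch using NumPy's reduced QR factorization, which returns the $n \times n$ upper triangular block $\mathbf{R}_n$ directly ($\mathbf{J}$ and $\Delta\mathbf{y}$ as in the earlier sketches):

```python
import numpy as np
from scipy.linalg import solve_triangular

def qr_shift(J, dy):
    """Shift vector from R_n dbeta = (Q^T dy)_n, without forming the normal equations."""
    Q, R = np.linalg.qr(J)     # reduced QR: Q is m x n, R is n x n upper triangular
    return solve_triangular(R, Q.T @ dy)
```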
Singular value decomposition
A variant of the method of orthogonal decomposition involves singular value decomposition, in which $\mathbf{R}$ is diagonalized by further orthogonal transformations,
$$\mathbf{J} = \mathbf{U} \boldsymbol\Sigma \mathbf{V}^\mathsf{T},$$
where $\mathbf{U}$ is orthogonal, $\boldsymbol\Sigma$ is a diagonal matrix of singular values and $\mathbf{V}$ is the orthogonal matrix of right singular vectors (the eigenvectors of $\mathbf{J}^\mathsf{T} \mathbf{J}$). In this case the shift vector is given by
$$\Delta\boldsymbol\beta = \sum_i \frac{\mathbf{u}_i^\mathsf{T}\, \Delta\mathbf{y}}{\sigma_i}\, \mathbf{v}_i,$$
where $\mathbf{u}_i$ and $\mathbf{v}_i$ are the columns of $\mathbf{U}$ and $\mathbf{V}$, and $\sigma_i$ are the singular values.
The relative simplicity of this expression is very useful in theoretical analysis of non-linear least squares. The application of singular value decomposition is discussed in detail in Lawson and Hanson.
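A sketch of the same shift vector computed from the singular value decomposition (again with illustrative names):

```python
import numpy as np

def svd_shift(J, dy):
    """Shift vector from the singular value decomposition J = U S V^T."""
    U, s, Vt = np.linalg.svd(J, full_matrices=False)
    # dbeta = sum_i (u_i^T dy / sigma_i) v_i
    return Vt.T @ ((U.T @ dy) / s)
```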
Gradient methods
There are many examples in the scientific literature where different methods have been used for non-linear data-fitting problems.
- Inclusion of second derivatives in the Taylor series expansion of the model function, $f(x_i, \boldsymbol\beta) \approx f(x_i, \boldsymbol\beta^k) + \sum_j J_{ij}\, \Delta\beta_j + \tfrac{1}{2} \sum_j \sum_l \Delta\beta_j\, \Delta\beta_l\, \frac{\partial^2 f(x_i, \boldsymbol\beta)}{\partial \beta_j\, \partial \beta_l}$. This is Newton's method in optimization. The matrix of second derivatives, $\mathbf{H}$, is known as the Hessian matrix. Although this model has better convergence properties near to the minimum, it is much worse when the parameters are far from their optimal values. Calculation of the Hessian adds to the complexity of the algorithm. This method is not in general use.
- Davidon–Fletcher–Powell method. This method, a form of quasi-Newton method, is similar to the one above but calculates the Hessian by successive approximation, to avoid having to use analytical expressions for the second derivatives.
- Steepest descent. Although a reduction in the sum of squares is guaranteed when the shift vector points in the direction of steepest descent, this method often performs poorly. When the parameter values are far from optimal the direction of the steepest descent vector, which is normal (perpendicular) to the contours of the objective function, is very different from the direction of the Gauss–Newton vector (the two directions are contrasted in the sketch after this list). This makes divergence much more likely, especially as the minimum along the direction of steepest descent may correspond to a small fraction of the length of the steepest descent vector. When the contours of the objective function are very eccentric, due to there being high correlation between parameters, the steepest descent iterations, with shift-cutting, follow a slow, zig-zag trajectory towards the minimum.
- Conjugate gradient search. This is an improved steepest descent based method with good theoretical convergence properties, although it can fail on finite-precision digital computers even when used on quadratic problems.
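A sketch contrasting the two directions at the same point; the steepest-descent vector is returned unscaled, since only its direction matters here ($\mathbf{J}$ and $\Delta\mathbf{y}$ as in the earlier sketches):

```python
import numpy as np

def descent_directions(J, dy):
    """Return the steepest-descent and Gauss-Newton shift vectors at the current point."""
    steepest = J.T @ dy                               # minus one half of the gradient of S
    gauss_newton = np.linalg.solve(J.T @ J, J.T @ dy)
    return steepest, gauss_newton
```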
Direct search methods
Direct search methods depend on evaluations of the objective function at a variety of parameter values and do not use derivatives at all. They offer alternatives to the use of numerical derivatives in the Gauss–Newton method and gradient methods.
- Alternating variable search. Each parameter is varied in turn by adding a fixed or variable increment to it and retaining the value that brings about a reduction in the sum of squares. The method is simple and effective when the parameters are not highly correlated. It has very poor convergence properties, but may be useful for finding initial parameter estimates.
- Nelder–Mead (simplex) search. A simplex in this context is a polytope of n + 1 vertices in n dimensions: a triangle in a plane, a tetrahedron in three-dimensional space, and so forth. Each vertex corresponds to a value of the objective function for a particular set of parameters. The shape and size of the simplex is adjusted by varying the parameters in such a way that the value of the objective function at the highest vertex always decreases. Although the sum of squares may initially decrease rapidly, the method can converge to a nonstationary point on quasiconvex problems, as shown by an example due to M. J. D. Powell. A minimal usage sketch follows this list.
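As a usage sketch, SciPy's general-purpose minimizer exposes the Nelder–Mead simplex search; the data and model below are the same hypothetical exponential decay used earlier:

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative data and model (hypothetical exponential decay)
x = np.linspace(0.0, 4.0, 25)
y = 2.0 * np.exp(-1.3 * x)

def sum_of_squares(beta):
    return np.sum((y - beta[0] * np.exp(-beta[1] * x)) ** 2)

result = minimize(sum_of_squares, x0=[1.0, 1.0], method="Nelder-Mead")
print(result.x)   # fitted parameters; no derivatives were required
```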
More detailed descriptions of these, and other, methods are available in Numerical Recipes, together with computer code in various languages.