NAG Library e02 - Curve and Surface Fitting
Scope of the Chapter
The main aim of this chapter is to assist the user in finding a function which approximates a set of data points. Typically the data contain random errors, as of experimental measurement, which need to be smoothed out. To seek an approximation to the data, it is first necessary to specify for the approximating function a mathematical form (a polynomial, for example) which contains a number of unspecified coefficients: the appropriate fitting function then derives for the coefficients the values which provide the best fit of that particular form. The chapter deals mainly with curve and surface fitting (i.e., fitting with functions of one and of two variables) when a polynomial or a cubic spline is used as the fitting function, since these cover the most common needs. However, fitting with other functions and/or more variables can be undertaken by means of general linear or nonlinear functions (some of which are contained in other chapters) depending on whether the coefficients in the function occur linearly or nonlinearly. Cases where a graph rather than a set of data points is given can be treated simply by first reading a suitable set of points from the graph.
The chapter also contains functions for evaluating, differentiating and integrating polynomial and spline curves and surfaces, once the numerical values of their coefficients have been determined.
There is, too, a function for computing a Padé approximant of a mathematical function (see Section [Padé Approximants]).
Background to the Problems
Preliminary Considerations
In the curve-fitting problems considered in this chapter, we have a dependent variable y and an independent variable x, and we are given a set of m data points (x_r, y_r), for r = 1, 2, …, m. The preliminary matters to be considered in this section will, for simplicity, be discussed in this context of curve-fitting problems. In fact, however, these considerations apply equally well to surface and higher-dimensional problems. Indeed, the discussion presented carries over essentially as it stands if, for these cases, we interpret x as a vector of several independent variables and correspondingly each x_r as a vector containing the rth data value of each independent variable.
We wish, then, to approximate the set of data points as closely as possible with a specified function, f(x) say, which is as smooth as possible: f(x) may, for example, be a polynomial. The requirements of smoothness and closeness conflict, however, and a balance has to be struck between them. Most often, the smoothness requirement is met simply by limiting the number of coefficients allowed in the fitting function – for example, by restricting the degree in the case of a polynomial. Given a particular number of coefficients in the function in question, the fitting functions of this chapter determine the values of the coefficients such that the "distance" of the function from the data points is as small as possible. The necessary balance is struck by the user comparing a selection of such fits having different numbers of coefficients. If the number of coefficients is too low, the approximation to the data will be poor. If the number is too high, the fit will be too close to the data, essentially following the random errors and tending to have unwanted fluctuations between the data points. Between these extremes, there is often a group of fits all similarly close to the data points and then, particularly when least-squares polynomials are used, the choice is clear: it is the fit from this group having the smallest number of coefficients.
The above process can be seen as the user minimizing the smoothness measure (i.e., the number of coefficients) subject to the distance from the data points being acceptably small. Some of the functions, however, do this task themselves. They use a different measure of smoothness (in each case one that is continuous) and minimize it subject to the distance being less than a threshold specified by the user. This is a much more automatic process, requiring only some experimentation with the threshold.
Fitting criteria: norms
A measure of the above "distance" between the set of data points and the function f(x) is needed. The distance ε_r from a single data point (x_r, y_r) to the function can simply be taken as

ε_r = y_r − f(x_r)   (1)

and is called the residual of the point. (With this definition, the residual is regarded as a function of the coefficients contained in f(x); however, the term is also used to mean the particular value of ε_r which corresponds to the fitted values of the coefficients.) However, we need a measure of distance for the set of data points as a whole. Three different measures are used in the different functions (which measure to select, according to circumstances, is discussed later in this sub-section). With ε_r defined in (1), these measures, or norms, are

Σ_{r=1}^{m} |ε_r|,   (2)

( Σ_{r=1}^{m} ε_r^2 )^{1/2}   (3)

and

max_r |ε_r|,   (4)

respectively the ℓ1 norm, the ℓ2 norm and the ℓ∞ norm.
Minimization of one or other of these norms usually provides the fitting criterion, the minimization being carried out with respect to the coefficients in the mathematical form used for f(x): with respect to the a_i, for example, if the mathematical form is the power series in (8) below. The fit which results from minimizing (2) is known as the ℓ1 fit, or the fit in the ℓ1 norm: that which results from minimizing (3) is the ℓ2 fit, the well-known least-squares fit (minimizing (3) is equivalent to minimizing the square of (3), i.e., the sum of squares of residuals, and it is the latter which is used in practice), and that from minimizing (4) is the ℓ∞, or minimax, fit.
Strictly speaking, implicit in the use of the above norms are the statistical assumptions that the random errors in the y_r are independent of one another and that any errors in the x_r are negligible by comparison. From this point of view, the use of the ℓ2 norm is appropriate when the random errors in the y_r have a normal distribution, and the ℓ∞ norm is appropriate when they have a rectangular distribution, as when fitting a table of values rounded to a fixed number of decimal places. The ℓ1 norm is appropriate when the error distribution has its frequency function proportional to the negative exponential of the modulus of the normalised error – not a common situation.
However, the user is often indifferent to these statistical considerations, and simply seeks a fit which can be assessed by inspection, perhaps visually from a graph of the results. In this event, the ℓ1 norm is particularly appropriate when the data are thought to contain some "wild" points (since fitting in this norm tends to be unaffected by the presence of a small number of such points), though of course in simple situations the user may prefer to identify and reject these points. The ℓ∞ norm should be used only when the maximum residual is of particular concern, as may be the case for example when the data values have been obtained by accurate computation, as of a mathematical function. Generally, however, a function based on least squares should be preferred, as being computationally faster and usually providing more information on which to assess the results. In many problems the three fits will not differ significantly for practical purposes.
Some of the functions based on the ℓ2 norm do not minimize the norm itself but instead minimize some (intuitively acceptable) measure of smoothness subject to the norm being less than a user-specified threshold. These functions fit with cubic or bicubic splines (see (10) and (14) below) and the smoothing measures relate to the size of the discontinuities in their third derivatives. A much more automatic fitting procedure follows from this approach.
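As a concrete illustration of the three fitting criteria, the norms (2)–(4) of a residual vector can be computed directly. This is a sketch in Python with NumPy, not part of the Library; the data points and the quadratic model are invented for the example.

```python
import numpy as np

# Invented data and a candidate fitting function f (a quadratic here).
x = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
y = np.array([1.1, 1.8, 3.2, 5.1, 7.9])
f = lambda t: 1.0 + 0.5 * t + 1.5 * t**2

eps = y - f(x)                    # residuals, as in equation (1)

l1   = np.sum(np.abs(eps))        # l1 norm, equation (2)
l2   = np.sqrt(np.sum(eps**2))    # l2 norm, equation (3)
linf = np.max(np.abs(eps))        # l_infinity norm, equation (4)
```

In a least-squares routine it is the square of `l2`, the sum of squares of residuals, that is actually minimized.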
Weighting of data points
The use of the above norms also assumes that the data values y_r are of equal (absolute) accuracy. Some of the functions enable an allowance to be made to take account of differing accuracies. The allowance takes the form of "weights" applied to the y-values so that those values known to be more accurate have a greater influence on the fit than others. These weights, to be supplied by the user, should be calculated from estimates of the absolute accuracies of the y-values, these estimates being expressed as standard deviations, probable errors or some other measure which has the same dimensions as y. Specifically, for each y_r the corresponding weight w_r should be inversely proportional to the accuracy estimate of y_r. For example, if the percentage accuracy is the same for all y_r, then the absolute accuracy of y_r is proportional to y_r (assuming y_r to be positive, as it usually is in such cases) and so w_r = K/y_r, for r = 1, 2, …, m, for an arbitrary positive constant K. (This definition of weight is stressed because often weight is defined as the square of that used here.) The norms (2), (3) and (4) above are then replaced respectively by

Σ_{r=1}^{m} |w_r ε_r|,   (5)

( Σ_{r=1}^{m} (w_r ε_r)^2 )^{1/2}   (6)

and

max_r |w_r ε_r|.   (7)

Again it is the square of (6) which is used in practice rather than (6) itself.
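The weighted norms can be sketched in the same way. The accuracy estimates below are invented standard deviations; the weights are taken inversely proportional to them, as the text prescribes.

```python
import numpy as np

# Invented y-values with assumed standard-deviation estimates s_r.
y_data = np.array([2.0, 4.0, 8.0, 16.0])
s      = np.array([0.1, 0.2, 0.4, 0.8])   # accuracy estimate of each y_r
w      = 1.0 / s                           # w_r inversely proportional to s_r

y_fit  = np.array([2.1, 3.8, 8.3, 15.5])  # values of some fitted function
eps    = y_data - y_fit                    # residuals

# weighted norms, equations (5)-(7); in practice the square of (6) is used
wl1   = np.sum(np.abs(w * eps))
wl2sq = np.sum((w * eps) ** 2)
wlinf = np.max(np.abs(w * eps))
```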
Curve Fitting
When, as is commonly the case, the mathematical form of the fitting function is immaterial to the problem, polynomials and cubic splines are to be preferred because their simplicity and ease of handling confer substantial benefits. The cubic spline is the more versatile of the two. It consists of a number of cubic polynomial segments joined end to end with continuity in first and second derivatives at the joins. The third derivative at the joins is in general discontinuous. The x-values of the joins are called knots, or, more precisely, interior knots. Their number determines the number of coefficients in the spline, just as the degree determines the number of coefficients in a polynomial.
Representation of cubic splines
A cubic spline is represented in the form

f(x) = Σ_{i=1}^{p} c_i N_i(x),   (10)

where N_i(x), for i = 1, 2, …, p, is a normalised cubic B-spline (see Hayes (1974)) and p exceeds the number of interior knots by four. This form, also, has advantages of computational speed and accuracy over alternative representations.
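By way of illustration, SciPy's BSpline class (standing in for the Library's spline handling; the knot set and coefficients below are invented) evaluates a cubic spline given in the B-spline form (10):

```python
import numpy as np
from scipy.interpolate import BSpline

# Cubic spline in B-spline form, equation (10): f(x) = sum_i c_i N_i(x).
# One interior knot at 0.5 on [0, 1]; the end knots are repeated four times,
# which is the usual convention for a clamped cubic spline.
t = np.array([0, 0, 0, 0, 0.5, 1, 1, 1, 1], dtype=float)  # full knot set
c = np.array([1.0, 2.0, 0.5, 3.0, 2.5])                   # coefficients c_i
spl = BSpline(t, c, k=3)

# With four-fold end knots, the spline's endpoint values equal the end
# coefficients.
v0, v1 = spl(0.0), spl(1.0)
```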
Surface Fitting
There are now two independent variables, and we shall denote these by x and y. The dependent variable, which was denoted by y in the curve-fitting case, will now be denoted by z. (This is a rather different notation from that indicated for the general-dimensional problem in the first paragraph of Section [Preliminary Considerations], but it has some advantages in presentation.)
Again, in the absence of contrary indications in the particular application being considered, polynomials and splines are the approximating functions most commonly used.
Representation of bivariate polynomials
The type of bivariate polynomial currently considered in the chapter can be represented in either of the two forms

f(x, y) = Σ_{i=0}^{k} Σ_{j=0}^{l} a_{ij} x^i y^j   (11)

and

f(x, y) = Σ'_{i=0}^{k} Σ'_{j=0}^{l} a_{ij} T_i(x) T_j(y),   (12)

where T_i(x) is the Chebyshev polynomial of the first kind of degree i in the argument x (see page 9 of Cox and Hayes (1973)), and correspondingly for T_j(y). The prime on the two summation signs, following standard convention, indicates that the first term in each sum is halved, as shown for one variable in equation (9). The two forms (11) and (12) are mathematically equivalent, but again the Chebyshev form is to be preferred on numerical grounds, as discussed in Section [Representation of polynomials].
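A bivariate Chebyshev series of the form (12) can be evaluated with NumPy. One point to watch, illustrated here with invented coefficients: NumPy's chebval2d uses unprimed sums, so coefficients stated in the primed convention of (12) must first have their first row and first column halved.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# chebval2d computes sum_ij a_ij T_i(x) T_j(y) with no halving, so convert
# primed-convention coefficients by halving row 0 (i = 0) and column 0 (j = 0).
a = np.array([[1.0, 0.5],
              [0.25, 0.0]])          # a_ij for i, j = 0, 1 (primed convention)
a_np = a.copy()
a_np[0, :] *= 0.5                    # halve the i = 0 terms
a_np[:, 0] *= 0.5                    # halve the j = 0 terms

val = C.chebval2d(0.5, -0.5, a_np)   # f(0.5, -0.5)
```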
Bicubic splines: definition and representation
The bicubic spline is defined over a rectangle R in the (x, y) plane, the sides of R being parallel to the x- and y-axes. R is divided into rectangular panels, again by lines parallel to the axes. Over each panel the bicubic spline is a bicubic polynomial, that is it takes the form

Σ_{i=0}^{3} Σ_{j=0}^{3} a_{ij} x^i y^j.   (13)

Each of these polynomials joins the polynomials in adjacent panels with continuity up to the second derivative. The constant x-values of the dividing lines parallel to the y-axis form the set of interior knots for the variable x, corresponding precisely to the set of interior knots of a cubic spline. Similarly, the constant y-values of dividing lines parallel to the x-axis form the set of interior knots for the variable y. Instead of representing the bicubic spline in terms of the above set of bicubic polynomials, however, it is represented, for the sake of computational speed and accuracy, in the form

f(x, y) = Σ_i Σ_j c_{ij} M_i(x) N_j(y),   (14)

where M_i(x), for each i, and N_j(y), for each j, are normalised cubic B-splines (see Hayes and Halliday (1974) for further details of bicubic splines and Hayes (1974) for normalised B-splines).
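A tensor-product spline of the form (14) can be evaluated with SciPy's bisplev, which takes the two knot sets and the coefficients c_ij directly. The single-panel example below is invented: with all coefficients equal to one, the B-splines' partition-of-unity property makes the surface identically one, which gives a simple check.

```python
import numpy as np
from scipy.interpolate import bisplev

# Bicubic spline in the form (14): f(x, y) = sum_i sum_j c_ij M_i(x) N_j(y).
# A single panel (no interior knots) on [0, 1] x [0, 1].
tx = np.array([0, 0, 0, 0, 1, 1, 1, 1], dtype=float)  # knots in x
ty = tx.copy()                                        # knots in y
c = np.ones(16)        # 4 x 4 coefficients, flattened; all ones => f == 1
tck = (tx, ty, c, 3, 3)

val = bisplev(0.3, 0.7, tck)   # evaluate at a single point
```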
General Linear and Nonlinear Fitting Functions
We have indicated earlier that, unless the data-fitting application under consideration specifically requires some other type of fit, a polynomial or a spline is usually to be preferred. Special functions for these types, in one and in two variables, are provided in this chapter. When the application does specify some other form of fitting, however, it may be treated by a function which deals with a general linear function, or by one for a general nonlinear function, depending on whether the coefficients occur linearly or nonlinearly.
The general linear fitting function can be written in the form

f(x) = c_1 φ_1(x) + c_2 φ_2(x) + … + c_p φ_p(x),   (15)

where x is a vector of one or more independent variables, and the φ_j(x) are any given functions of these variables (though they must be linearly independent of one another if there is to be the possibility of a unique solution to the fitting problem). This is not intended to imply that each φ_j(x) is necessarily a function of all the variables: we may have, for example, that each φ_j(x) is a function of a different single variable, and even that one of the φ_j(x) is a constant. All that is required is that a value of each φ_j(x) can be computed when a value of each independent variable is given.
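A least-squares fit of the general linear form (15) amounts to building an observation matrix whose columns are the chosen functions φ_j evaluated at the data points. The sketch below uses NumPy rather than a Library routine, and the particular basis functions are invented for the illustration.

```python
import numpy as np

# Fit f(x) = c_1*phi_1(x) + c_2*phi_2(x) + c_3*phi_3(x) by least squares.
phis = [lambda x: np.ones_like(x),   # phi_1: a constant
        lambda x: x,                 # phi_2: a single variable
        lambda x: np.exp(-x)]        # phi_3: any computable function

x = np.linspace(0.0, 2.0, 20)
y = 1.0 + 2.0 * x + 3.0 * np.exp(-x)           # noise-free synthetic data

A = np.column_stack([phi(x) for phi in phis])  # A[r, j] = phi_j(x_r)
coef, *_ = np.linalg.lstsq(A, y, rcond=None)   # recovers c = (1, 2, 3)
```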
When the fitting function f(x) is not linear in its coefficients, no more specific representation is available in general than f(x) itself. However, we shall find it helpful later on to indicate the fact that f(x) contains a number of coefficients (to be determined by the fitting process) by using instead the notation f(x; c), where c denotes the vector of coefficients. An example of a nonlinear fitting function is

f(x; c) = c_1 + c_2 e^{−c_4 x} + c_3 e^{−c_5 x},   (16)

which is in one variable and contains five coefficients. Note that here, as elsewhere in this Chapter Introduction, we use the term "coefficients" to include all the quantities whose values are to be determined by the fitting process, not just those which occur linearly. We may observe that it is only the presence of the coefficients c_4 and c_5 which makes the form (16) nonlinear. If the values of these two coefficients were known beforehand, (16) would instead be a linear function which, in terms of the general linear form (15), has p = 3 and φ_1(x) = 1, φ_2(x) = e^{−c_4 x}, φ_3(x) = e^{−c_5 x}.
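A nonlinear fit of this shape can be sketched with SciPy's curve_fit (a general-purpose routine, not a NAG one); the data below are synthetic and noise-free. As with any nonlinear fitting process, a reasonable starting point for the coefficients is needed.

```python
import numpy as np
from scipy.optimize import curve_fit

# A constant plus two exponentials, nonlinear in the decay rates c4 and c5,
# of the shape shown in (16).
def f(x, c1, c2, c3, c4, c5):
    return c1 + c2 * np.exp(-c4 * x) + c3 * np.exp(-c5 * x)

x = np.linspace(0.0, 4.0, 40)
y = f(x, 0.5, 2.0, 1.0, 1.0, 3.0)     # synthetic, noise-free data

c_opt, _ = curve_fit(f, x, y, p0=[0.4, 1.5, 0.8, 0.8, 2.5])
```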
We may note also that polynomials and splines, such as (9) and (14), are themselves linear in their coefficients. Thus if, when fitting with these functions, a suitable special function is not available (as when more than two independent variables are involved, or when fitting in the ℓ1 norm), it is appropriate to use a function designed for a general linear function.
Constrained Problems
So far, we have considered only fitting processes in which the values of the coefficients in the fitting function are determined by an unconstrained minimization of a particular norm. Some fitting problems, however, require that further restrictions be placed on the determination of the coefficient values. Sometimes these restrictions are contained explicitly in the formulation of the problem in the form of equalities or inequalities which the coefficients, or some function of them, must satisfy. For example, if the fitting function contains an exponential term, it may be required that the coefficient in the exponent be of a particular sign. Often, however, the equality or inequality constraints relate to the value of the fitting function or its derivatives at specified values of the independent variable(s), but these too can be expressed in terms of the coefficients of the fitting function, and it is appropriate to do this if a general linear or nonlinear function is being used. For example, if the fitting function is that given in (10), the requirement that the first derivative of the function at x = x_0 be non-negative can be expressed as

Σ_{i=1}^{p} c_i N_i′(x_0) ≥ 0,   (17)

where the prime denotes differentiation with respect to x and each derivative is evaluated at x = x_0. On the other hand, if the requirement had been that the derivative at x = x_0 be exactly zero, the inequality sign in (17) would be replaced by an equality.
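A constraint of this kind is linear in the coefficients, so the constrained least-squares problem can be sketched with a general-purpose optimizer. The example below uses SciPy's SLSQP (not a NAG routine) and, for brevity, a simple polynomial basis rather than B-splines; the data are invented. The constraint f′(0) ≥ 0 reduces to c_1 ≥ 0 in this basis.

```python
import numpy as np
from scipy.optimize import minimize

# Fit c0 + c1*x + c2*x^2 by least squares subject to f'(0) = c1 >= 0, the
# kind of derivative constraint shown in (17).
x = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
y = np.array([1.0, 0.9, 1.1, 1.6, 2.4])

A = np.column_stack([np.ones_like(x), x, x**2])
obj = lambda c: np.sum((A @ c - y) ** 2)      # sum of squares of residuals

res = minimize(obj, x0=np.zeros(3), method="SLSQP",
               constraints=[{"type": "ineq", "fun": lambda c: c[1]}])
c = res.x
```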
Functions which provide a facility for minimizing the appropriate norm subject to such constraints are discussed in Section [Constraints].
Recommendations on Choice and Use of Available Functions
General
The choice of a function to treat a particular fitting problem will depend first of all on the fitting function and the norm to be used. Unless there is good reason to the contrary, the fitting function should be a polynomial or a cubic spline (in the appropriate number of variables) and the norm should be the ℓ2 norm (leading to the least-squares fit). If some other function is to be used, the choice of function will depend on whether the function is nonlinear (in which case see Section [Nonlinear functions]) or linear in its coefficients (see Section [General linear functions]), and, in the latter case, on whether the ℓ1, ℓ2 or ℓ∞ norm is to be used. The latter section is appropriate for polynomials and splines, too, if the ℓ1 or ℓ∞ norm is preferred, with one exception: there is a special function for fitting polynomial curves in the unweighted ℓ∞ norm (see Section [Minimax space polynomials]).
In the case of a polynomial or cubic spline, if there is only one independent variable, the user should choose a spline (see Section [Cubic Spline Curves]) when the curve represented by the data is of complicated form, perhaps with several peaks and troughs. When the curve is of simple form, first try a polynomial (see Section [Polynomial Curves]) of low degree, say up to degree 5 or 6, and then a spline if the polynomial fails to provide a satisfactory fit. (Of course, if third-derivative discontinuities are unacceptable, a polynomial is the only choice.) If the problem is one of surface fitting, the polynomial function (see Section [Least-squares polynomials]) should be tried first if the data arrangement happens to be appropriate, otherwise one of the spline functions (see Section [Automatic fitting with bicubic splines]). If the problem has more than two independent variables, it may be treated by the general linear function in Section [General linear functions], again using a polynomial in the first instance.
Another factor which affects the choice of function is the presence of constraints, as previously discussed in Section [Constrained Problems]. Indeed this factor is likely to be overriding at present, because of the limited number of functions which have the necessary facility. Consequently those functions have been grouped together for discussion in Section [Constraints].
Data considerations
A satisfactory fit cannot be expected by any means if the number and arrangement of the data points do not adequately represent the character of the underlying relationship: sharp changes in behaviour, in particular, such as sharp peaks, should be well covered. Data points should extend over the whole range of interest of the independent variable(s): extrapolation outside the data ranges is most unwise. Then, with polynomials, it is advantageous to have additional points near the ends of the ranges, to counteract the tendency of polynomials to develop fluctuations in these regions. When, with polynomial curves, the user can precisely choose the x-values of the data, the special points defined in Section [Least-squares polynomials: selected data points] should be selected. With polynomial surfaces, each of these same x-values should, where possible, be combined with each of a corresponding set of y-values (not necessarily with the same value of n), thus forming a rectangular grid of (x, y)-values. With splines the choice is less critical as long as the character of the relationship is adequately represented. All fits should be tested graphically before accepting them as satisfactory.
For this purpose it should be noted that it is not sufficient to plot the values of the fitted function only at the data values of the independent variable(s); at the least, its values at a similar number of intermediate points should also be plotted, as unwanted fluctuations may otherwise go undetected. Such fluctuations are the less likely to occur the lower the number of coefficients chosen in the fitting function. No firm guide can be given, but as a rough rule, at least initially, the number of coefficients should not exceed half the number of data points (points with equal or nearly equal values of the independent variable, or both independent variables in surface fitting, counting as a single point for this purpose). However, the situation may be such, particularly with a small number of data points, that a satisfactorily close fit to the data cannot be achieved without unwanted fluctuations occurring. In such cases, it is often possible to improve the situation by a transformation of one or more of the variables, as discussed in the next section: otherwise it will be necessary to provide extra data points. Further advice on curve fitting is given in Cox and Hayes (1973) and, for polynomials only, in Hayes (1970). Much of the advice applies also to surface fitting; see also the function documents.
Transformation of variables
Before starting the fitting, consideration should be given to the choice of a good form in which to deal with each of the variables: often it will be satisfactory to use the variables as they stand, but sometimes the use of the logarithm, square root, or some other function of a variable will lead to a better-behaved relationship. This question is customarily taken into account in preparing graphs and tables of a relationship and the same considerations apply when curve or surface fitting. The practical context will often give a guide. In general, it is best to avoid having to deal with a relationship whose behaviour in one region is radically different from that in another. A steep rise at the left-hand end of a curve, for example, can often best be treated by curve fitting in terms of log(x + c) with some suitable value of the constant c. A case when such a transformation gave substantial benefit is discussed in page 60 of Hayes (1970). According to the features exhibited in any particular case, transformation of either dependent variable or independent variable(s) or both may be beneficial. When there is a choice it is usually better to transform the independent variable(s): if the dependent variable is transformed, the weights attached to the data points must be adjusted. Thus (denoting the dependent variable by y, as in the notation for curves) if the y_r to be fitted have been obtained by a transformation y = g(Y) from original data values Y_r, with weights W_r, for r = 1, 2, …, m, we must take

w_r = W_r / (dy/dY),   (18)

where the derivative is evaluated at Y_r. Strictly, the transformation of Y and the adjustment of weights are valid only when the data errors in the Y_r are small compared with the range spanned by the Y_r, but this is usually the case.
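The weight adjustment (18) can be sketched for a logarithmic transformation, where y = log(Y) gives dy/dY = 1/Y and hence w_r = W_r Y_r. The data values and original weights below are invented.

```python
import numpy as np

# Transform the dependent variable and adjust the weights as in (18).
Y = np.array([1.0, 2.0, 5.0, 10.0])   # original data values Y_r
W = np.array([4.0, 2.0, 1.0, 0.5])    # original weights W_r

y = np.log(Y)                         # transformed dependent variable
dydY = 1.0 / Y                        # derivative dy/dY evaluated at Y_r
w = W / dydY                          # adjusted weights, equation (18)
```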
Polynomial Curves
Least-squares polynomials: arbitrary data points
e02adc (nag_1d_cheb_fit) fits to arbitrary data points, with arbitrary weights, polynomials of all degrees up to a maximum degree k, which is at the user's choice. If the user is seeking only a low-degree polynomial, up to degree 5 or 6 say, k = 10 is an appropriate value, providing there are about 20 data points or more. To assist in deciding the degree of polynomial which satisfactorily fits the data, the function provides the root-mean-square residual s_i for all degrees i = 1, 2, …, k. In a satisfactory case, these s_i will decrease steadily as i increases and then settle down to a fairly constant value, as shown in the example.
If the s_i settle down in this way, it indicates that the closest polynomial approximation justified by the data has been achieved. The degree which first gives the approximately constant value of s_i (degree 5 in the example) is the appropriate degree to select. (Users who are prepared to accept a fit higher than sixth degree should simply find a high enough value of k to enable the type of behaviour indicated by the example to be detected: thus they should seek values of k for which at least 4 or 5 consecutive values of s_i are approximately the same.) If the degree were allowed to go high enough, s_i would, in most cases, eventually start to decrease again, indicating that the data points are being fitted too closely and that undesirable fluctuations are developing between the points. In some cases, particularly with a small number of data points, this final decrease is not distinguishable from the initial decrease in s_i. In such cases, users may seek an acceptable fit by examining the graphs of several of the polynomials obtained. Failing this, they may (a) seek a transformation of variables which improves the behaviour, (b) try fitting a spline, or (c) provide more data points. If data can be provided simply by drawing an approximating curve by hand and reading points from it, use the points discussed in Section [Least-squares polynomials: selected data points].
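The degree-selection procedure can be sketched as follows, with NumPy's Chebyshev fitting standing in for e02adc and invented data (a cubic plus small noise). The root-mean-square residual drops sharply until the true degree is reached, then settles near the noise level.

```python
import numpy as np

# Fit polynomials of increasing degree and watch the RMS residual s_i.
rng = np.random.default_rng(1)
x = np.linspace(-1.0, 1.0, 25)
y = 1.0 - x + 2 * x**3 + rng.normal(0.0, 0.01, x.size)  # cubic + noise

s = []
for deg in range(7):
    coef = np.polynomial.chebyshev.chebfit(x, y, deg)
    resid = y - np.polynomial.chebyshev.chebval(x, coef)
    s.append(np.sqrt(np.mean(resid**2)))
# s drops until degree 3, then stays near the noise level: degree 3 is
# the degree to select.
```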
Least-squares polynomials: selected data points
When users are at liberty to choose the x-values of data points, such as when the points are taken from a graph, it is most advantageous when fitting with polynomials to use the values x_r = cos(πr/n), for r = 0, 1, …, n, for some value of n, a suitable value for which is discussed at the end of this section. Note that these x_r relate to the variable x after it has been normalised so that its range of interest is −1 to +1. e02adc (nag_1d_cheb_fit) may then be used as in Section [Least-squares polynomials: arbitrary data points] to seek a satisfactory fit. However, if the ordinate values are of equal weight, as would often be the case when they are read from a graph, e02afc (nag_1d_cheb_interp_fit) is to be preferred, as being simpler to use and faster. This latter algorithm provides the coefficients a_j, for j = 0, 1, …, n, in the Chebyshev series form of the polynomial of degree n which interpolates the data. In a satisfactory case, the later coefficients in this series, after some initial significant ones, will exhibit a random behaviour, some positive and some negative, with a size about that of the errors in the data or less. All these "random" coefficients should be discarded, and the remaining (initial) terms of the series be taken as the approximating polynomial. This truncated polynomial is a least-squares fit to the data, though with the point at each end of the range given half the weight of each of the other points. The following example illustrates a case in which degree 5 or perhaps 6 would be chosen for the approximating polynomial.
Basically, the value of n used needs to be large enough to exhibit the type of behaviour illustrated in the above example. A value of n = 16 is suggested as being satisfactory for very many practical problems, the required cosine values for this value of n being given on page 11 of Cox and Hayes (1973). If a satisfactory fit is not obtained, a spline fit should be tried, or, if the user is prepared to accept a higher degree of polynomial, n should be increased: doubling n is an advantageous strategy, since the set of values cos(πr/2n), for r = 0, 1, …, 2n, contains all the values of cos(πr/n), for r = 0, 1, …, n, so that the old data set will then be re-used in the new one. Thus, for example, increasing n from 16 to 32 will require only 16 new data points, a smaller number than for any other increase of n. If data points are particularly expensive to obtain, a smaller initial value of n than 16 may be tried, provided the user is satisfied that the number is adequate to reflect the character of the underlying relationship. Again, the number should be doubled if a satisfactory fit is not obtained.
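The interpolation at the special abscissae can be sketched directly: at the points x_r = cos(πr/n) the Chebyshev coefficients follow from a discrete orthogonality relation in which the first and last terms of the sum are halved. This NumPy sketch mimics what e02afc provides; the data function is invented (a quadratic, so only a_0, a_1, a_2 are significant and the rest vanish).

```python
import numpy as np

# Chebyshev series coefficients from data at x_r = cos(pi*r/n).
n = 8
r = np.arange(n + 1)
x = np.cos(np.pi * r / n)            # the recommended data abscissae
y = 1.0 + x - 0.5 * x**2             # invented data read at these points

# T[j, r] = T_j(x_r) = cos(pi*j*r/n); halve the end terms of each sum.
T = np.cos(np.pi * np.outer(np.arange(n + 1), r) / n)
w = np.ones(n + 1); w[0] = w[-1] = 0.5
a = (2.0 / n) * (T * w) @ y          # coefficients a_j, j = 0, 1, ..., n
# Here a_0 = 1.5, a_1 = 1, a_2 = -0.25 and the remaining a_j vanish, so
# the series would be truncated after the first three terms.
```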
Minimax space polynomials
The polynomial of chosen degree which is a minimax (ℓ∞) fit to arbitrary data points, with possibly unequal weights, can be determined by treating the polynomial as a general linear function and fitting with e02gcc (nag_linf_fit). To arrive at a satisfactory degree it will be necessary to try several different degrees and examine the results graphically. Initial guidance can be obtained from the value of the maximum residual: this will vary with the degree of the polynomial in very much the same way as does s_i in least-squares fitting, but it is much more expensive to investigate this behaviour in the same detail.
Cubic Spline Curves
Least-squares cubic splines
e02bac (nag_1d_spline_fit_knots) fits to arbitrary data points, with arbitrary weights, a cubic spline with interior knots specified by the user. The choice of these knots so as to give an acceptable fit must largely be a matter of trial and error, though with a little experience a satisfactory choice can often be made after one or two trials. It is usually best to start with a small number of knots (too many will result in unwanted fluctuations in the fit, or even in there being no unique solution) and, examining the fit graphically at each stage, to add a few knots at a time at places where the fit is particularly poor. Moving the existing knots towards these places will also often improve the fit. In regions where the behaviour of the curve underlying the data is changing rapidly, closer knots will be needed than elsewhere. Otherwise, positioning is not usually very critical and equally-spaced knots are often satisfactory. See also the next section, however.
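The trial-and-error process can be sketched with SciPy's LSQUnivariateSpline, which plays the same role as e02bac: the user supplies the interior knots and the routine returns the least-squares cubic spline. The data below are invented, with one rapid change near x = 5, so the interior knots are clustered there.

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

# Least-squares cubic spline with user-chosen interior knots.
x = np.linspace(0.0, 10.0, 50)
y = np.tanh(x - 5.0)              # data with one rapid change near x = 5

knots = [4.0, 5.0, 6.0]           # closer knots where behaviour changes fast
spl = LSQUnivariateSpline(x, y, knots, k=3)
resid = y - spl(x)                # examine these (graphically) at each trial
```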
A useful feature of the function is that it can be used in applications which require the continuity to be less than the normal continuity of the cubic spline. For example, the fit may be required to have a discontinuous slope at some point in the range. This can be achieved by placing three coincident knots at the given point. Similarly a discontinuity in the second derivative at a point can be achieved by placing two knots there. Analogy with these discontinuous cases can provide guidance in more usual cases: for example, just as three coincident knots can produce a discontinuity in slope, so three close knots can produce a rapid change in slope. The closer the knots are, the more rapid can the change be.
An example set of data is given in Figure 1. It is a rather tricky set, because of the scarcity of data on the right, but it will serve to illustrate some of the above points and to show some of the dangers to be avoided. Three interior knots (indicated by the vertical lines at the top of the diagram) are chosen as a start. We see that the resulting curve is not steep enough in the middle and fluctuates at both ends, severely on the right. The spline is unable to cope with the shape and more knots are needed.
In Figure 2, three knots have been added in the centre, where the data shows a rapid change in behaviour, and one further out at each end, where the fit is poor. The fit is still poor, so a further knot is added in this region and, in Figure 3, disaster ensues in rather spectacular fashion.
The reason is that, at the right-hand end, the fits in Figures 1 and 2 have been interpreted as poor simply because of the fluctuations about the curve underlying the data (or what it is naturally assumed to be). But the fitting process knows only about the data and nothing else about the underlying curve, so it is important to consider only closeness to the data when deciding goodness of fit.
Thus, in Figure 1, the curve fits the last two data points quite well compared with the fit elsewhere, so no knot should have been added in this region. In Figure 2, the curve goes exactly through the last two points, so a further knot is certainly not needed here.
Figure 4 shows what can be achieved without the extra knot on each of the flat regions. Remembering that within each knot interval the spline is a cubic polynomial, there is really no need to have more than one knot interval covering each flat region.
What we have, in fact, in Figures 2 and 3 is a case of too many knots (so too many coefficients in the spline equation) for the number of data points. The warning in the second paragraph of Section [Preliminary Considerations] was that the fit will then be too close to the data, tending to have unwanted fluctuations between the data points. The warning applies locally for splines, in the sense that, in localities where there are plenty of data points, there can be a lot of knots, as long as there are few knots where there are few points, especially near the ends of the interval. In the present example, with so few data points on the right, just the one extra knot in Figure 2 is too many! The signs are clearly present, with the last two points fitted exactly (at least to the graphical accuracy and actually much closer than that) and fluctuations within the last two knot-intervals (see Figure 1, where only the final point is fitted exactly and one of the wobbles spans several data points).
The situation in Figure 3 is different. The fit, if computed exactly, would still pass through the last two data points, with even more violent fluctuations. However, the problem has become so ill-conditioned that all accuracy has been lost. Indeed, if the last interior knot were moved a tiny amount to the right, there would be no unique solution and an error message would have resulted. Near-singularity is, sadly, not picked up by the function, but can be spotted readily in a graph, as in Figure 3. B-spline coefficients becoming large, with alternating signs, is another indication. However, it is better to avoid such situations, firstly by providing, whenever possible, data adequately covering the range of interest, and secondly by placing knots only where there is a reasonable amount of data.
The example here could, in fact, have utilised from the start the observation made in the second paragraph of this section, that three close knots can produce a rapid change in slope. The example has two such rapid changes and so requires two sets of three close knots (in fact, the two sets can be so close that one knot can serve in both sets, so only five knots prove sufficient in Figure 4). It should be noted, however, that the rapid change of slope must occur within the range spanned by the three close knots. This is the reason that the six knots in Figure 2 are not satisfactory: they do not quite span the two turns.
Some more examples to illustrate the choice of knots are given in Cox and Hayes (1973).
|
|
Automatic fitting with cubic splines
|
|
e02bec (nag_1d_spline_fit) fits cubic splines to arbitrary data points with arbitrary weights but itself chooses the number and positions of the knots. The user has to supply only a threshold for the sum of squares of residuals. The function first builds up a knot set by a series of trial fits in the ℓ2 norm. Then, with the knot set decided, the final spline is computed to minimize a certain smoothing measure subject to satisfaction of the chosen threshold. Thus it is easier to use than e02bac (nag_1d_spline_fit_knots) (see the previous section), requiring only some experimentation with this threshold. It should therefore be the first choice unless the user has a preference for the ordinary least-squares fit or, for example, wishes to experiment with knot positions, trying to keep their number down (e02bec (nag_1d_spline_fit) aims only to be reasonably frugal with knots).
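As a sketch of this style of automatic fitting (using SciPy's splrep, which implements a Dierckx-type smoothing-spline algorithm in the same spirit, rather than the NAG interface; the data and threshold value are invented for illustration), the parameter s plays the role of the residual-sum-of-squares threshold:

```python
import numpy as np
from scipy.interpolate import splrep, splev

rng = np.random.default_rng(1)
x = np.linspace(0.0, 4.0, 80)
y = np.sin(2.0 * x) + 0.05 * rng.standard_normal(x.size)

s = 0.25                  # threshold on the sum of squared residuals
tck = splrep(x, y, s=s)   # knot count and positions chosen automatically
t, c, k = tck             # knots, B-spline coefficients, degree

fitted = splev(x, tck)
residual_ss = float(np.sum((fitted - y) ** 2))
print(len(t), residual_ss)
```

Only the threshold s needs experimentation: a smaller s forces more knots and a closer (possibly over-close) fit, a larger s gives a smoother curve, exactly as described above.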
|
|
|
Polynomial and Spline Surfaces
|
|
|
Automatic fitting with bicubic splines
|
|
e02ddc (nag_2d_spline_fit_scat) fits bicubic splines to arbitrary data points with arbitrary weights but chooses the knot sets itself. The user has to supply only a threshold for the sum of squares of residuals. Just like the automatic curve-fitting function e02bec (nag_1d_spline_fit) (see Section [Automatic fitting with cubic splines]), e02ddc (nag_2d_spline_fit_scat) then builds up the knot sets and finally fits a spline minimizing a smoothing measure subject to satisfaction of the threshold. Again, this easier-to-use function is normally to be preferred, at least in the first instance.
e02dcc (nag_2d_spline_fit_grid) is a very similar function to e02ddc (nag_2d_spline_fit_scat) but deals with data points of equal weight which lie on a rectangular mesh in the plane. This kind of data allows a very much faster computation and so is to be preferred when applicable. Substantial departures from equal weighting can be ignored if the user is not concerned with statistical questions, though the quality of the fit will suffer if this is taken too far. In such cases, the user should revert to e02ddc (nag_2d_spline_fit_scat).
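A sketch of fitting gridded data of this kind (SciPy's Dierckx-based RectBivariateSpline stands in for e02dcc; the mesh and surface values are invented for illustration):

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

# A rectangular mesh in the (x, y) plane; weights are implicitly equal.
x = np.linspace(0.0, 1.0, 12)
y = np.linspace(0.0, 2.0, 15)
z = np.exp(-np.add.outer(x**2, y))   # z[i, j] = exp(-(x_i**2 + y_j))

# s plays the role of the residual threshold; s=0 interpolates the mesh data.
spl = RectBivariateSpline(x, y, z, s=0.0)

value = spl(0.5, 1.0)[0, 0]          # evaluate off the mesh
print(value)
```

The gridded structure is what permits the much faster computation mentioned above; for scattered or genuinely unequal-weight data the scattered-data function must be used instead.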
|
|
|
General Linear and Nonlinear Fitting Functions
|
|
|
General linear functions
|
|
For the general linear function (15), functions are available for fitting in all three norms. The least-squares methods (which are to be preferred unless there is good reason to use another norm – see Section [Fitting criteria: norms]) are discussed in Section [LQ factorization] in the f08 Chapter Introduction (see, e.g., f08aec (nag_dgeqrf)). The ℓ∞ function is e02gcc (nag_linf_fit) and, for the ℓ1 norm, e02gac (nag_lone_fit) is provided.
All the above functions are essentially linear algebra functions, and in considering their use we need to view the fitting process in a slightly different way from hitherto. Taking y to be the dependent variable and x the vector of independent variables, we have, as for equation (1) but with each x_r now a vector,

y_r ≈ f(x_r),   r = 1, 2, ..., m.

Substituting for f(x_r) the general linear form (15), we can write this as

c_1 φ_1(x_r) + c_2 φ_2(x_r) + ... + c_p φ_p(x_r) ≈ y_r,   r = 1, 2, ..., m.   (19)
Thus we have a system of m linear equations in the p coefficients c_j. Usually, in writing these equations, the ≈ signs are omitted and equality simply taken as implied. The system of equations is then described as an overdetermined system (since we must have m ≥ p if there is to be the possibility of a unique solution to our fitting problem), and the fitting process of computing the c_j to minimize one or other of the norms (2), (3) and (4) can be described, in relation to the system of equations, as solving the overdetermined system in that particular norm. In matrix notation, the system can be written as

Φc ≈ y,   (20)

where Φ is the m by p matrix whose element in row r and column j is φ_j(x_r), for r = 1, 2, ..., m; j = 1, 2, ..., p. The vectors c and y respectively contain the coefficients c_j and the data values y_r.
These functions, however, use the standard notation of linear algebra, the overdetermined system of equations being denoted by
Ax ≈ b.   (21)
The correspondence between this notation and that which we have used for the data-fitting problem (20) is therefore given by
A = Φ,   x = c,   b = y.   (22)
Note that the norms used by these functions are the unweighted norms (2), (3) and (4). If the user wishes to apply weights to the data points, that is, to use the norms (5), (6) or (7), the equivalences (22) should be replaced by

A = DΦ,   x = c,   b = Dy,

where D is a diagonal matrix with w_r as the rth diagonal element. Here w_r, for r = 1, 2, ..., m, is the weight of the rth data point as defined in Section [Weighting of data points].
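The correspondence A = DΦ, b = Dy can be sketched numerically. The quadratic basis {1, x, x²} and the weights below are hypothetical, and NumPy's least-squares solver stands in for the Chapter f08 functions:

```python
import numpy as np

# Data points (x_r, y_r) with weights w_r; the true curve is 2 - x + 0.5 x^2.
x = np.linspace(-1.0, 1.0, 20)
y = 2.0 - x + 0.5 * x**2
w = np.linspace(1.0, 2.0, 20)          # arbitrary positive weights

# Phi[r, j] = phi_j(x_r) for the basis {1, x, x^2}: an overdetermined
# system of m = 20 equations in p = 3 coefficients.
Phi = np.column_stack([np.ones_like(x), x, x**2])

# Weighted problem: A = D Phi, b = D y, with D = diag(w_1, ..., w_m).
A = w[:, None] * Phi
b = w * y

# Solve the overdetermined system in the least-squares sense.
c, residuals, rank, sv = np.linalg.lstsq(A, b, rcond=None)
print(c)
```

With exact data, the computed coefficient vector c reproduces the coefficients of the underlying quadratic.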
|
|
Nonlinear functions
|
|
Functions for fitting with a nonlinear function in the ℓ2 norm are provided in Chapter e04, which should be consulted for the appropriate choice of function. Again, however, the notation adopted is different from that which we have used for data fitting. In the latter, we denote the fitting function by f(x; c), where x is the vector of independent variables and c is the vector of coefficients, whose values are to be determined. The squared ℓ2 norm, to be minimized with respect to the elements of c, is then

Σ (r = 1 to m) w_r² [y_r − f(x_r; c)]²,   (23)

where y_r is the rth data value of the dependent variable, x_r is the vector containing the rth values of the independent variables, and w_r is the corresponding weight as defined in Section [Weighting of data points].
On the other hand, in the nonlinear least-squares functions of Chapter e04, the function to be minimized is denoted by

F(x) = Σ (i = 1 to m) [f_i(x)]²,   (24)

the minimization being carried out with respect to the elements of the vector x. The correspondence between the two notations is given by

x = c,   f_i(x) = w_i [y_i − f(x_i; c)],   i = 1, 2, ..., m.

Note especially that the vector x of variables of the nonlinear least-squares functions is the vector c of coefficients of the data-fitting problem, and in particular that, if the selected function requires derivatives of the f_i(x) to be provided, these are derivatives of w_i [y_i − f(x_i; c)] with respect to the coefficients c of the data-fitting problem.
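A sketch of this correspondence (SciPy's general nonlinear least-squares minimizer stands in for the Chapter e04 functions; the exponential model f(x; c) = c1·exp(c2·x) is invented for illustration):

```python
import numpy as np
from scipy.optimize import least_squares

# Synthetic data from the hypothetical model f(x; c) = c1 * exp(c2 * x).
x = np.linspace(0.0, 1.0, 30)
y = 2.0 * np.exp(-1.5 * x)
w = np.ones_like(x)

def residuals(c):
    # f_i(x) = w_i * [y_i - f(x_i; c)]: the data-fitting residuals, in
    # the form expected by a nonlinear least-squares minimizer.
    return w * (y - c[0] * np.exp(c[1] * x))

# The minimizer's "variables" are the data-fitting coefficients c.
sol = least_squares(residuals, x0=[1.0, -1.0])
print(sol.x)
```

Any derivatives the minimizer asks for are likewise derivatives of the residuals with respect to the coefficients c, not with respect to the data values.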
|
|
|
Evaluation, Differentiation and Integration
|
|
Functions are available to evaluate, differentiate and integrate polynomials in Chebyshev-series form and cubic or bicubic splines in B-spline form. These polynomials and splines may have been produced by the various fitting functions or, in the case of polynomials, from prior calls of the differentiation and integration functions themselves.
e02aec (nag_1d_cheb_eval) and e02akc (nag_1d_cheb_eval2) evaluate polynomial curves: the latter has a longer parameter list but does not require the user to normalise the values of the independent variable and can accept coefficients which are not stored in contiguous locations. e02cbc (nag_2d_cheb_eval) evaluates polynomial surfaces, e02bbc (nag_1d_spline_evaluate) cubic spline curves, and e02dec (nag_2d_spline_eval) and e02dfc (nag_2d_spline_eval_rect) bicubic spline surfaces.
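A sketch of the evaluation step, including the normalisation of the independent variable that the polynomial-evaluation functions require (NumPy's Chebyshev utilities stand in for e02aec; the interval and coefficient values are invented):

```python
from numpy.polynomial import chebyshev as C

# Coefficients a_i of a Chebyshev series: 1*T0 + 0.5*T1 + 0.25*T2.
a = [1.0, 0.5, 0.25]

# Normalise x from its original interval [xmin, xmax] onto [-1, 1],
# which is where the Chebyshev polynomials T_i are defined.
xmin, xmax = 0.0, 4.0
x = 1.0
xbar = (2.0 * x - xmin - xmax) / (xmax - xmin)

value = C.chebval(xbar, a)
print(value)
```

Here xbar = -0.5, and the series evaluates to 1 + 0.5·(-0.5) + 0.25·(2·0.25 − 1) = 0.625.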
Differentiation and integration of polynomial curves are carried out by e02ahc (nag_1d_cheb_deriv) and e02ajc (nag_1d_cheb_intg) respectively. The results are provided in Chebyshev-series form and so repeated differentiation and integration are catered for. Values of the derivative or integral can then be computed using the appropriate evaluation function. Polynomial surfaces can be treated by a sequence of calls of one or other of the same two functions, differentiating or integrating the form (12) piece by piece. For example, if, for some given value of j, the coefficients a_ij, for each value of i, are supplied to e02ahc (nag_1d_cheb_deriv), we obtain coefficients ā_ij say, which are the coefficients in the derivative with respect to x of the polynomial

Σ_i a_ij T_i(x).
If this is repeated for each value of j, we obtain all the coefficients ā_ij in the derivative of the surface with respect to x, namely

Σ_i Σ_j ā_ij T_i(x) T_j(y).   (25)
The derivative of (12), or of (25), with respect to y can be obtained in a corresponding manner. In the latter case, for example, for each value of i in turn we supply the coefficients ā_i0, ā_i1, ā_i2, ..., to the function. Values of the resulting polynomials, such as (25), can subsequently be computed using e02cbc (nag_2d_cheb_eval). It is important, however, to note one exception: the process described will not give valid results for differentiating or integrating a surface with respect to y if the normalisation of x was made dependent upon y, an option which is available in the fitting function e02cac (nag_2d_cheb_fit_lines).
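The repeated differentiation and integration of Chebyshev coefficients, and the piece-by-piece treatment of a surface, can be sketched with NumPy's Chebyshev utilities (standing in for e02ahc, e02ajc and e02cbc; the coefficient values are invented):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Differentiating a Chebyshev series: d/dx T2(x) = 4x = 4*T1(x).
d = C.chebder([0.0, 0.0, 1.0])
print(d)                       # coefficients of the derivative series

# Integration also returns coefficients in Chebyshev-series form, so it
# can be repeated; the constant makes the integral vanish at x = 0.
a = C.chebint(d)

# A surface: coefficients a[i, j] of sum_ij a_ij T_i(x) T_j(y).
# Differentiate with respect to x piece by piece, i.e., one column of
# coefficients (one value of j) at a time, via the axis argument.
surf = np.array([[0.0, 0.0],
                 [0.0, 1.0]])  # the surface T1(x)*T1(y) = x*y
dsurf = C.chebder(surf, axis=0)
value = C.chebval2d(0.5, 0.3, dsurf)   # d/dx (x*y) = y at (0.5, 0.3)
print(value)
```

The derivative coefficients [0, 4] confirm T2' = 4·T1, and the surface derivative evaluates to y = 0.3 at the chosen point.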
For splines the differentiation and integration functions provided are of a different nature from those for polynomials. e02bcc (nag_1d_spline_deriv) provides values of a cubic spline curve and its first three derivatives (the rest, of course, are zero) at a given value of x. e02bdc (nag_1d_spline_intg) computes the value of the definite integral of a cubic spline over its whole range. Again the functions can be applied to surfaces, this time of the form (14). For example, if, for each value of j in turn, the coefficients c_ij, for each value of i, are supplied to e02bcc (nag_1d_spline_deriv) with x = x0, and on each occasion we select from the output the value of the second derivative, d_j say, and if the whole set of d_j are then supplied to the same function with x = y0, the output will contain all the values at (x0, y0) of the second derivative with respect to x and its first three derivatives with respect to y.
Equally, if after each of the first set of calls of e02bcc (nag_1d_spline_deriv) we had selected the function value (e02bbc (nag_1d_spline_evaluate) would also provide this) instead of the second derivative, and had supplied these values to e02bdc (nag_1d_spline_intg), the result obtained would have been the value of the definite integral of s(x0, y) with respect to y, taken between the end-points of the interval over which the spline was defined in that variable.
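A sketch of spline derivative and definite-integral evaluation (SciPy's splev and splint stand in for e02bcc and e02bdc; the test curve is invented):

```python
import numpy as np
from scipy.interpolate import splrep, splev, splint

x = np.linspace(0.0, 2.0, 50)
y = x**3
tck = splrep(x, y, s=0.0)      # interpolating cubic spline through the data

# Value and first derivative at a point (derivatives beyond the third
# are, of course, zero for a cubic spline).
d0 = splev(1.0, tck, der=0)    # the function value, x^3 at x = 1
d1 = splev(1.0, tck, der=1)    # the derivative, 3x^2 at x = 1

# Definite integral over the whole range: int_0^2 x^3 dx = 4.
total = splint(0.0, 2.0, tck)
print(float(d0), float(d1), float(total))
```

Because a cubic polynomial lies exactly in every cubic-spline space, the interpolant here reproduces the curve, its derivatives, and its integral essentially to machine precision.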
|
|
|
Decision Trees
|
|
|
Note: these Decision Trees are concerned with unconstrained fitting; for constrained fitting, consult Section [Constraints].
|
|
Tree 1
|
|
Q:1 Does the user have reason to choose a type of fitting function other than polynomials or splines?
Q:2 Do the coefficients in the form occur linearly?
|
See least-squares functions in Chapter e04
|
Q:3 Does the user prefer the norm?
Q:4 Does the user prefer the norm?
|
See the least-squares functions in Chapter f08
|
Q:5 Does the problem have more than two independent variables?
Q:6 Does the user prefer the norm?
Q:7 Does the user prefer the norm?
|
See the least-squares functions in Chapter f08
|
Q:8 Does the user wish to use the norm?
Q:9 Is there just one independent variable (curve fitting)?
Q:10 Does the data have equal weights and lie on a rectangular mesh?
Q:11 Are the data on lines? (see Section [Least-squares polynomials])
|
|
Tree 2
|
|
Q:1 Does the user wish to use the norm?
Q:2 Does the user prefer polynomials?
Q:3 Can the user choose the x-values of the data points?
Q:4 Choose x-values as defined in Section [Least-squares polynomials: selected data points]. Do the data points have different weights?
Q:5 Does the user wish to use norm?
|
|
|
Index
|
|
Automatic fitting,
with bicubic splines: e02dcc (nag_2d_spline_fit_grid)
with bicubic splines: e02ddc (nag_2d_spline_fit_scat)
with cubic splines: e02bec (nag_1d_spline_fit)
Data on lines: e02cac (nag_2d_cheb_fit_lines)
Data on rectangular mesh: e02dcc (nag_2d_spline_fit_grid)
Differentiation,
of cubic splines: e02bcc (nag_1d_spline_deriv)
of polynomials: e02ahc (nag_1d_cheb_deriv)
Evaluation,
of bicubic splines: e02dec (nag_2d_spline_eval)
of bicubic splines: e02dfc (nag_2d_spline_eval_rect)
of cubic splines: e02bbc (nag_1d_spline_evaluate)
of cubic splines and derivatives: e02bcc (nag_1d_spline_deriv)
of definite integral of cubic splines: e02bdc (nag_1d_spline_intg)
of polynomials,
in one variable: e02aec (nag_1d_cheb_eval)
in one variable: e02akc (nag_1d_cheb_eval2)
in two variables: e02cbc (nag_2d_cheb_eval)
of rational functions: e02rbc (nag_1d_pade_eval)
Integration,
of cubic splines (definite integral): e02bdc (nag_1d_spline_intg)
of polynomials: e02ajc (nag_1d_cheb_intg)
ℓ1 fit,
with general linear function: e02gac (nag_lone_fit)
Least-squares curve fit,
with cubic splines: e02bac (nag_1d_spline_fit_knots)
with polynomials,
arbitrary data points: e02adc (nag_1d_cheb_fit)
selected data points: e02afc (nag_1d_cheb_interp_fit)
with constraints: e02agc (nag_1d_cheb_fit_constr)
Least-squares surface fit,
with polynomials: e02cac (nag_2d_cheb_fit_lines)
Minimax space fit,
with general linear function: e02gcc (nag_linf_fit)
Padé approximants: e02rac (nag_1d_pade)
|
|
See Also
|
|
Baker G A (1975) Essentials of Padé Approximants Academic Press, New York
Cox M G and Hayes J G (1973) Curve fitting: a guide and suite of algorithms for the non-specialist user NPL Report NAC26 National Physical Laboratory
Hayes J G (ed.) (1970) Numerical Approximation to Functions and Data Athlone Press, London
Hayes J G (1974) Numerical methods for curve and surface fitting Bull. Inst. Math. Appl. 10 144–152
Hayes J G and Halliday J (1974) The least-squares fitting of cubic spline surfaces to general data sets J. Inst. Math. Appl. 14 89–103
NAG Toolbox Overview.
NAG Web Site.
|
|