Regression Commands - Maple Help


Regression Commands


The Statistics package provides various commands for fitting linear and nonlinear models to data points and performing regression analysis. The fitting algorithms are based on least-squares methods, which minimize the sum of squared residuals.

Available Commands


ExponentialFit - fit an exponential function to data

Fit - fit a model function to data

LinearFit - fit a linear model function to data

LogarithmicFit - fit a logarithmic function to data

Lowess - produce lowess smoothed functions

NonlinearFit - fit a nonlinear model function to data

OneWayANOVA - generate a one-way ANOVA table

PolynomialFit - fit a polynomial to data

PowerFit - fit a power function to data

PredictiveLeastSquares - fit a predictive linear model function to data

RepeatedMedianEstimator - robust linear regression

Linear Fitting


A number of commands are available for fitting a model function that is linear in the model parameters to given data.  For example, the model function b*t^2 + a*t is linear in the parameters a and b, though it is nonlinear in the independent variable t.


The LinearFit command is available for multiple general linear regression.  For certain classes of model functions involving only one independent variable, the PolynomialFit, LogarithmicFit, PowerFit, and ExponentialFit commands are available. The PowerFit and ExponentialFit commands use a transformed model function that is linear in the parameters.
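A sketch of these calling sequences (the data Vectors X and Y here are hypothetical, chosen only to make the example self-contained):

```maple
with(Statistics):
X := Vector([1, 2, 3, 4, 5], datatype = float):             # hypothetical data
Y := Vector([2.1, 4.4, 9.1, 16.3, 24.9], datatype = float):

# General linear regression: fit b*t^2 + a*t by listing its basis functions.
LinearFit([t, t^2], X, Y, t);

# Specialized one-variable fits; PowerFit and ExponentialFit work with a
# transformed model that is linear in the parameters.
PowerFit(X, Y, t);
ExponentialFit(X, Y, t);
```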

Nonlinear Fitting


The NonlinearFit command is available for nonlinear fitting.  An example model function is a*x + exp(b*y), where a and b are the parameters, and x and y are the independent variables.


This command relies on local nonlinear optimization solvers available in the Optimization package.  The LSSolve and NLPSolve commands in that package can also be used directly for least-squares and general nonlinear minimization.
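A minimal sketch of such a fit (hypothetical data; because the model a*x + exp(b*y) has two independent variables, the independent data are given as a two-column Matrix):

```maple
with(Statistics):
# Hypothetical data: the columns of XY hold x and y; Z holds the dependent values.
XY := Matrix([[1.0, 0.1], [2.0, 0.2], [3.0, 0.3], [4.0, 0.4]]):
Z := Vector([1.9, 3.5, 5.2, 7.0]):

# Local nonlinear least-squares fit of a*x + exp(b*y).
NonlinearFit(a*x + exp(b*y), XY, Z, [x, y]);
```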

Other Commands


The general Fit command allows you to provide either a linear or nonlinear model function.  It then determines the appropriate regression solver to use.
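For instance (hypothetical data; Fit dispatches to the linear solver here because a*t^2 + b*t is linear in a and b):

```maple
with(Statistics):
X := Vector([1, 2, 3, 4], datatype = float):        # hypothetical data
Y := Vector([3.2, 10.1, 21.3, 35.8], datatype = float):
Fit(a*t^2 + b*t, X, Y, t);
```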


The OneWayANOVA command generates the standard ANOVA table for one-way classification, given two or more groups of observations.
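A sketch with three hypothetical groups of observations:

```maple
with(Statistics):
G1 := [5.1, 4.9, 5.4]:   # hypothetical observations, group 1
G2 := [6.0, 6.2, 5.8]:   # group 2
G3 := [4.5, 4.7, 4.4]:   # group 3
OneWayANOVA([G1, G2, G3]);
```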

Using the Regression Commands


Various options can be provided to the regression commands. For example, the weights option allows you to specify weights for the data points and the output option allows you to control the format of the results.  The options available for each command are described briefly in the command's help page and in greater detail in the Statistics/Regression/Options help page.


The format of the solutions returned by the regression commands is described in the Statistics/Regression/Solution help page.


Most of the regression commands use methods implemented in a built-in library provided by the Numerical Algorithms Group (NAG).  The underlying computation is done in floating-point.  Either hardware or software (arbitrary precision) floating-point computation can be specified.


The model function and data sets may be provided in different ways.  Full details are available in the Statistics/Regression/InputForms help page. The regression routines work primarily with Vectors and Matrices.  In most cases, lists (both flat and nested) and Arrays are also accepted and automatically converted to Vectors or Matrices.  Consequently, all output, including error messages, uses these data types.



Define Vectors X and Y, containing values of an independent variable x and a dependent variable y.
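The original data values are not reproduced here; hypothetical Vectors of the same shape would be entered as:

```maple
with(Statistics):
X := Vector([1, 2, 3, 4, 5, 6], datatype = float):                 # hypothetical x values
Y := Vector([2.3, 5.1, 9.8, 18.7, 36.2, 71.5], datatype = float):  # hypothetical y values
```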



Find the values of a and b that minimize the least-squares error when the model function a*x + b*exp(x) is used.
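A sketch of this call, with X and Y the data Vectors defined above; the basis functions x and exp(x) are listed explicitly:

```maple
LinearFit([x, exp(x)], X, Y, x);
```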




Fit a polynomial of degree 3 through this data.
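With X and Y as above, the degree-3 polynomial fit is:

```maple
PolynomialFit(3, X, Y, x);
```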




Use the output option to see the residual sum of squares and the standard errors.
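A sketch of the same fit with an explicit output specification (these output names are documented in Statistics/Regression/Solution):

```maple
PolynomialFit(3, X, Y, x,
    output = [leastsquaresfunction, residualsumofsquares, standarderrors]);
```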




Fit the model function a*x + exp(b*x), which is nonlinear in the parameters.
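With X and Y as above, this nonlinear fit is:

```maple
NonlinearFit(a*x + exp(b*x), X, Y, x);
```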




Consider now an experiment where quantities x, y, and z influence a quantity w according to an approximate relationship


with unknown parameters a, b, and c. Six data points are given by the following matrix, with respective columns for x, y, z, and w.




We take an initial guess that the first term will be approximately quadratic in x, that b will be approximately 1, and for c we do not even know whether it will be positive or negative, so we guess c = 0. We compute both the model function and the residuals. We also select more verbose output by setting infolevel.
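The relationship itself is not reproduced above; assuming, purely for illustration, a model of the form x^a + b*y + c*z (consistent with the exponent on x discussed afterwards) and a 6 x 4 data Matrix M with columns x, y, z, and w, the call would look like:

```maple
infolevel[Statistics] := 2:
# Hypothetical model x^a + b*y + c*z; M is the hypothetical data Matrix.
NonlinearFit(x^a + b*y + c*z, M[.., 1 .. 3], M[.., 4], [x, y, z],
    initialvalues = [a = 2, b = 1, c = 0],
    output = [leastsquaresfunction, residuals]);
```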



In NonlinearFit (algebraic form)



We note that Maple selected the nonlinear fitting method. Furthermore, the exponent on x is only about 1.14, and the other guesses were not very good either. However, this problem is well-conditioned enough that Maple finds a good fit anyway.

Now suppose that the relationship that is used to model the data is altered as follows:


We adapt the calling sequence very slightly:


In Fit
In LinearFit (container form)




This time, Maple was able to select the linear fitting method, because the expression is linear in the parameters. The initial values for the parameters are not used.

Finally, consider a situation where an ordinary differential equation leads to results that need to be fitted. The system is given by


where a and b are parameters that we want to find, z is a variable that we can vary between experiments, and x(t) is a quantity that we can measure at t = 1. We perform 10 experiments at z = 0.1, 0.2, ..., 1.0, and the results are as follows.







We now need to set up a procedure that NonlinearFit can call to obtain the value for a given input value z and a given pair of parameters a and b. We do this using dsolve/numeric.
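The ODE system itself is not reproduced above; as a sketch with a hypothetical right-hand side standing in for it, the parametrized numeric solver is set up like this:

```maple
# Hypothetical ODE standing in for the system in the text; the 'parameters'
# option defers fixing a, b, and z until solution time.
ODE_Solution := dsolve({diff(x(t), t) = a*x(t) + b*z, x(0) = 1},
    numeric, 'parameters' = [a, b, z]);
```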





ODE_Solution := proc(x_rkf45) ... end proc


We now have a procedure ODE_Solution that can compute the correct value, but we need to write a wrapper that has the form that NonlinearFit expects. We first need to call ODE_Solution once to set the parameters, then another time to obtain the value of x(t) at t = 1, and then return this value (for more information about how this works, see dsolve/numeric). By hand, we can do this as follows:
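A sketch of these two steps, with parameter values chosen only for illustration:

```maple
# Fix the parameters, then query the solution at t = 1.
ODE_Solution('parameters' = [a = 1, b = 0.5, z = 0.1]);
eval(x(t), ODE_Solution(1));   # value of x(t) at t = 1
```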











Error, (in ODE_Solution) cannot evaluate the solution past the initial point, problem may be complex, initially singular or improperly set up

Note that for some settings of the parameters, we cannot obtain a solution. We need to take care of this in the procedure we create (which we call f), by returning a value that is very far from all output points, leading to a very bad fit for these erroneous parameter values.

f := proc(zValue, aValue, bValue)
    global ODE_Solution, a, b, z, x, t;
    ODE_Solution('parameters' = [a = aValue, b = bValue, z = zValue]);
    try
        return eval(x(t), ODE_Solution(1));
    catch:
        return 100;
    end try;
end proc;





We need to provide an initial estimate for the parameter values, because the fitting procedure only searches locally. We go with the values that provided a solution above: a = 1, b = 0.5.
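A sketch of the fitting call; Z holds the ten z values, and W is a hypothetical Vector standing in for the measured results, which are not reproduced here:

```maple
Z := Vector([seq(0.1*i, i = 1 .. 10)], datatype = float):
# W: hypothetical stand-in for the ten measured values of x(t) at t = 1.
W := Vector([seq(0.5 + 0.1*i, i = 1 .. 10)], datatype = float):
NonlinearFit(f, Z, W, initialvalues = <1, 0.5>);
```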




See Also

CurveFitting, Statistics, Statistics/Computation, Statistics/MaximumLikelihoodEstimate, Statistics/Regression/Options, Statistics/Regression/Solution, TimeSeriesAnalysis



References

Draper, Norman R., and Smith, Harry. Applied Regression Analysis. 3rd ed. New York: Wiley, 1998.
