fit(deprecated)/leastsquare - Help

stats[fit, leastsquare]

Fit a curve to data using the method of least squares

 Calling Sequence

 stats[fit, leastsquare[vars, eqn, parms]](data)

 fit[leastsquare[vars, eqn, parms]](data)

Parameters

 data - list of statistical lists

 vars - list of variables, corresponding, in order, to the lists in data

 eqn - equation to fit (optional; default: a linear equation with the last variable in vars as the dependent variable and with a constant term)

 parms - set of parameters to be determined (optional; default: indets(eqn) minus op(vars))

Description

 • Important: The stats package has been deprecated. Use the superseding package Statistics instead.
 • The function leastsquare of the subpackage stats[fit, ...] fits a curve to the given data using the method of least squares.
 • The equation to fit must be linear in the unknown parameters; the equation itself need not be linear in the variables. For example, $y=a{x}^{2}+bx+c$ with the parameters a, b, c is accepted. Note that some equations whose parameters appear nonlinearly can be transformed into linear ones: $y=a{ⅇ}^{bx}$ can be transformed into $\mathrm{ln}\left(y\right)=A+bx$, where $A=\mathrm{ln}\left(a\right)$. The leastsquare command does not apply this transformation automatically, but the Statistics[ExponentialFit] command can be used instead.
 • Missing data and ranges cannot be handled.
 • Weighted data are handled in the following fashion: the weight associated with the dependent variable is the weight given to the corresponding data point; weight specifications corresponding to the independent variables are ignored.
 • Data fitting routines are also available in the Statistics package.  For more information, see the Statistics[Regression] help page.
 • The command with(stats[fit],leastsquare) allows the use of the abbreviated form of this command.
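The log-transform trick described above can be sketched outside Maple as well. In this NumPy illustration (hypothetical data, not part of the stats package), points are generated from y = 2*exp(0.5*x); fitting a line to (x, ln y) recovers the assumed parameters a = 2 and b = 0.5:

```python
import numpy as np

# Hypothetical noise-free data from y = a*exp(b*x) with a = 2, b = 0.5.
# Taking logarithms gives ln(y) = A + b*x, which is linear in the
# unknowns A = ln(a) and b, so ordinary least squares applies.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = 2.0 * np.exp(0.5 * x)

b, A = np.polyfit(x, np.log(y), 1)  # degree-1 fit: slope b, intercept A
a = np.exp(A)                        # undo the transformation: a = exp(A)
```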

Examples

Important: The stats package has been deprecated. Use the superseding package Statistics instead.

 > $\mathrm{with}\left(\mathrm{stats}\right):$
 > $\mathrm{fit}[\mathrm{leastsquare}[\left[x,y,z\right]]]\left(\left[\left[1,2,3,5\right],\left[2,4,6,8\right],\left[3,5,7,10\right]\right]\right)$
 ${z}{=}{x}{+}\frac{{1}}{{2}}{}{y}{+}{1}$ (1)
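The same least-squares problem can be reproduced for comparison with NumPy (an independent sketch, not the stats package): the default model is linear in x and y with a constant term, so a design matrix of [1, x, y] columns suffices.

```python
import numpy as np

# Same data as the Maple example above: independent x, y and dependent z.
x = np.array([1, 2, 3, 5], dtype=float)
y = np.array([2, 4, 6, 8], dtype=float)
z = np.array([3, 5, 7, 10], dtype=float)

# Design matrix for the default linear model z = c0 + c1*x + c2*y.
A = np.column_stack([np.ones_like(x), x, y])
coef, *_ = np.linalg.lstsq(A, z, rcond=None)
# coef ≈ [1, 1, 0.5], matching z = x + (1/2)*y + 1
```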

Here is an example using Weight. The two calls below produce the same result: a weight of 2 on a dependent value is equivalent to listing that data point twice.

 > $\mathrm{fit}[\mathrm{leastsquare}[\left[x,y,z\right]]]\left(\left[\left[1,2,3,5,5,5\right],\left[2,4,6,8,8,8\right],\left[3,5,7,10,15,15\right]\right]\right)$
 ${z}{=}\frac{{13}}{{3}}{}{x}{-}\frac{{7}}{{6}}{}{y}{+}{1}$ (2)
 > $\mathrm{fit}[\mathrm{leastsquare}[\left[x,y,z\right]]]\left(\left[\left[1,2,3,5,5\right],\left[2,4,6,8,8\right],\left[3,5,7,10,\mathrm{Weight}\left(15,2\right)\right]\right]\right)$
 ${z}{=}\frac{{13}}{{3}}{}{x}{-}\frac{{7}}{{6}}{}{y}{+}{1}$ (3)
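The equivalence of the two calls above can be checked independently. In this NumPy sketch (an illustration, not the stats package), a weight w is applied by scaling a row of the design matrix and the observation by sqrt(w), which reproduces the duplicated-point fit:

```python
import numpy as np

# Six-point data: the point (5, 8, 15) listed twice.
x6 = np.array([1, 2, 3, 5, 5, 5], dtype=float)
y6 = np.array([2, 4, 6, 8, 8, 8], dtype=float)
z6 = np.array([3, 5, 7, 10, 15, 15], dtype=float)
A6 = np.column_stack([np.ones_like(x6), x6, y6])
coef_dup, *_ = np.linalg.lstsq(A6, z6, rcond=None)

# Five-point data with weight 2 on the last point: scaling the row and
# observation by sqrt(w) gives the weighted least-squares solution.
x5 = np.array([1, 2, 3, 5, 5], dtype=float)
y5 = np.array([2, 4, 6, 8, 8], dtype=float)
z5 = np.array([3, 5, 7, 10, 15], dtype=float)
w = np.array([1, 1, 1, 1, 2], dtype=float)
A5 = np.column_stack([np.ones_like(x5), x5, y5]) * np.sqrt(w)[:, None]
coef_w, *_ = np.linalg.lstsq(A5, z5 * np.sqrt(w), rcond=None)
# Both give z = (13/3)*x - (7/6)*y + 1
```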

 > $\mathrm{Xvalues}≔\left[1,2,3,4\right]$
 ${\mathrm{Xvalues}}{:=}\left[{1}{,}{2}{,}{3}{,}{4}\right]$ (4)
 > $\mathrm{Yvalues}≔\left[0,6,14,24\right]$
 ${\mathrm{Yvalues}}{:=}\left[{0}{,}{6}{,}{14}{,}{24}\right]$ (5)
 > $\mathrm{eq_fit}≔\mathrm{fit}[\mathrm{leastsquare}[\left[x,y\right],y=a{x}^{2}+bx+c,\left\{a,b,c\right\}]]\left(\left[\mathrm{Xvalues},\mathrm{Yvalues}\right]\right)$
 ${\mathrm{eq_fit}}{:=}{y}{=}{{x}}^{{2}}{+}{3}{}{x}{-}{4}$ (6)

The {a,b,c} argument is optional in this case, since the equation contains no extra Maple variables (compare with y = a*x^2 + b*x + c + Pi*x^3, where Pi is definitely not a parameter).

 > $\mathrm{eq_fit}≔\mathrm{fit}[\mathrm{leastsquare}[\left[x,y\right],y=a{x}^{2}+bx+c]]\left(\left[\mathrm{Xvalues},\mathrm{Yvalues}\right]\right)$
 ${\mathrm{eq_fit}}{:=}{y}{=}{{x}}^{{2}}{+}{3}{}{x}{-}{4}$ (7)

Transform this equation into a procedure:

 > $\mathrm{eq_function}≔\mathrm{unapply}\left(\mathrm{rhs}\left(\mathrm{eq_fit}\right),x\right)$
 ${\mathrm{eq_function}}{:=}{x}{→}{{x}}^{{2}}{+}{3}{}{x}{-}{4}$ (8)

Then compute the predicted values (we could have used map() in this case, since the data involve no classes or weights):

 > $\mathrm{Yvalues_predicted}≔\mathrm{transform}[\mathrm{apply}[\mathrm{eq_function}]]\left(\mathrm{Xvalues}\right)$
 ${\mathrm{Yvalues_predicted}}{:=}\left[{0}{,}{6}{,}{14}{,}{24}\right]$ (9)

Find the residuals:

 > $\mathrm{Residuals}≔\mathrm{transform}[\mathrm{multiapply}[\left(x,y\right)→x-y]]\left(\left[\mathrm{Yvalues},\mathrm{Yvalues_predicted}\right]\right)$
 ${\mathrm{Residuals}}{:=}\left[{0}{,}{0}{,}{0}{,}{0}\right]$ (10)

The residuals are all zero since all the points fall on the quadratic.
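The whole worked example (fit, predict, residuals) can be mirrored in NumPy as a sketch for comparison (not the stats package itself):

```python
import numpy as np

# Same data as above; fit the quadratic y = a*x^2 + b*x + c.
xvalues = np.array([1, 2, 3, 4], dtype=float)
yvalues = np.array([0, 6, 14, 24], dtype=float)

coef = np.polyfit(xvalues, yvalues, 2)       # highest power first: [a, b, c]
predicted = np.polyval(coef, xvalues)        # analogue of transform[apply]
residuals = yvalues - predicted              # analogue of transform[multiapply]
# coef ≈ [1, 3, -4] and residuals ≈ [0, 0, 0, 0]:
# the points lie exactly on y = x^2 + 3*x - 4
```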