
Calling Sequence


Lowess(XY, options)
Lowess(X, Y, options)
L := Lowess(...)
L(x1, x2, ..., xN)
L(M)


Parameters


XY



Matrix, Array, DataFrame, or listlist of data convertible to numeric. If XY is a Matrix, DataFrame, or Array, it must have at least two columns. If XY is a listlist, then each inner list is considered one row, with the $k$th entry of each inner list belonging to the $k$th column, and the same requirement of at least two columns applies. Each row of XY is interpreted as one data point. If XY has $m$ columns, then the first $m-1$ columns contain the values of the $m-1$ independent variables. The last column contains the corresponding values of the dependent variable.

X



Vector, list, Matrix, Array, DataSeries, or listlist of data convertible to numeric with the number of rows equal to the length of Y. The columns are interpreted as the values of the independent variables.

Y



Vector, list, DataSeries, or Array of data convertible to numeric with length equal to the number of rows in X. The values in Y are the corresponding values of the dependent variable.

options



(optional) equation(s) of the form option=value where option is one of fitorder, bandwidth, or iters

x1, x2, ..., xN



evaluates L at (x1, x2, ..., xN), where N is equal to $m-1$ or to the number of columns in X.

M



a Matrix with number of columns equal to the number of columns in X, or to $m-1$. Returns a Vector whose $i$th element is L evaluated with the $i$th row of M as arguments.





Options



fitorder

The degree of the polynomial used in each local regression. The default value is $1$.


bandwidth

The proportion of the input data points used in each local regression. The default value depends on fitorder and the number of input data points.


iters

The number of robustness iterations performed when smoothing data with one independent variable. Each iteration makes the result smoother by downweighting outliers, thus making the computation more robust. This option has no effect when the data has more than one independent variable. The default is $2$.



Description


•

The Lowess command creates a function whose values represent the input data smoothed with the lowess algorithm.

•

Suppose the input data set $\mathrm{XY}$ has $m$ independent variables and $n$ data points. The lowess smoothed value at $x:=\left(x_{1},x_{2},\dots,x_{m}\right)$ is computed as follows.

–

Take the $n\cdot \mathrm{bandwidth}$ points in $\mathrm{XY}$ that are closest to $x$.

–

Fit a polynomial $P$ of $m$ variables and degree $\mathrm{fitorder}$ to the points using weighted linear least squares, where the weight for a point $w$ is computed by applying the tricube weight function to the distance between $x$ and $w$.

–

Evaluate $P\left(x\right)$.

•

Running one or more iterations, as specified by the iters option, will produce a set of weights to reduce the influence of outliers (that is, make the computation more robust). At each iteration, the weight of a point depends on the residual of the Lowess curve at that point in the previous iteration. These weights are combined with the weights given by the distance, as described previously.
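The local-regression steps and robustness iterations described above can be sketched outside Maple. The following Python sketch (using NumPy) is illustrative only, not Maple's implementation; the name `lowess_eval` and the robustness-weight formula based on six times the median absolute residual are assumptions modeled on the classical lowess literature:

```python
import numpy as np

def tricube(u):
    # Tricube weight function: (1 - |u|^3)^3 for |u| < 1, else 0.
    u = np.clip(np.abs(u), 0.0, 1.0)
    return (1.0 - u**3) ** 3

def lowess_eval(x, xs, ys, bandwidth=0.3, fitorder=1, iters=2):
    """Lowess smoothed value at x for data (xs, ys), one independent variable."""
    n = len(xs)
    # Take the n * bandwidth points closest to x (at least fitorder + 1).
    k = max(fitorder + 1, int(np.ceil(bandwidth * n)))
    robust = np.ones(n)  # robustness weights, refined each iteration
    for _ in range(iters + 1):
        idx = np.argsort(np.abs(xs - x))[:k]
        d = np.abs(xs[idx] - x)
        # Distance weights (tricube of scaled distance) times robustness weights.
        w = tricube(d / (d.max() + 1e-12)) * robust[idx]
        # Weighted least-squares fit of a degree-`fitorder` polynomial;
        # np.polyfit minimizes sum((w_i * (p(x_i) - y_i))^2), so pass sqrt(w).
        coeffs = np.polyfit(xs[idx], ys[idx], fitorder, w=np.sqrt(w))
        # Downweight points with large residuals for the next iteration.
        resid = ys[idx] - np.polyval(coeffs, xs[idx])
        s = np.median(np.abs(resid))
        robust[idx] = tricube(resid / (6.0 * s + 1e-12))
    return np.polyval(coeffs, x)
```

For example, smoothing noise-free samples of $\sin$ on $\left[0,\mathrm{\pi}\right]$ and evaluating near $\frac{\mathrm{\pi}}{2}$ gives a value close to $1$.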

•

L returns unevaluated if its arguments are not all convertible to numeric values. However, if the first and only argument is a Matrix whose number of columns equals the number of parameters of L, a Vector is returned whose $i$th element is L applied with the $i$th row of the Matrix as its arguments.
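The row-wise Matrix evaluation just described behaves like mapping L over the rows of the Matrix; a minimal Python sketch of that behavior (the helper name `eval_rowwise` is hypothetical):

```python
import numpy as np

def eval_rowwise(L, M):
    # Apply L to each row of M, returning a vector of values,
    # mirroring the L(M) calling convention described above.
    return np.array([L(*row) for row in np.asarray(M)])
```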



Examples


Create a data sample and add some random error to it.
>

X := Sample(Uniform(-2, 2), 200):

>

Y := Sample(Uniform(-2, 2), 200):

>

Zerror := Sample(Normal(0, 0.1), 200):

>

Z := X *~ map(exp, -X^~2 - Y^~2) + Zerror:

>

XYZ := Matrix([[X],[Y],[Z]],datatype=float[8])^%T;

Create the function whose graph is the smoothed surface.
>

L := Lowess(XYZ, fitorder = 2, bandwidth = 0.3):

Plot the data sample, the smoothed surface, and the region between the plane $z=-0.4$ and the surface for $-1.5\le x\le -0.5$ and $-1\le y\le 1$.
>

P := ScatterPlot3D(XYZ):

>

Q := plot3d(L, -2..2, -2..2, grid=[25,25]):

>

R := plots:-shadebetween(L(x,y), -0.4, x = -1.5..-0.5, y = -1..1, showboundary = false, negativeonly):

>

plots:-display(P, Q, R, orientation=[100,70,0], lightmodel=none);

Find the volume of the shaded region.
>

int(-0.4 - L(x,y), x = -1.5..-0.5, y = -1..1, numeric, epsilon = 0.01, method = _CubaSuave);

For a two-dimensional example, we will create another data sample.
>

X := Sample(Uniform(0, Pi), 200);

>

Yerror := Sample(Normal(0, 0.1), 200);

>

Y := map(sin, X) + Yerror;

Create the function whose graph is the smoothed curve.
>

L := CurveFitting:-Lowess(X, Y, fitorder = 1, bandwidth = 0.3):

Plot the data sample, the smoothed curve, and the region between the $x$-axis and the curve for $\frac{\mathrm{\pi}}{8}\le x\le \frac{3\mathrm{\pi}}{8}$.
>

P := ScatterPlot(X, Y);

>

Q := plot(L(x), x = 0..Pi);

>

R := plots:-shadebetween(L(x), 0, x = Pi/8..3*Pi/8, showboundary = false, positiveonly);

>

plots:-display(P, Q, R);

Find the area of the shaded region.
>

int(L, Pi/8..3*Pi/8, numeric, epsilon = 0.01);

${0.526726498845341}$
 (6) 
And find the maximum.
>

Optimization:-Maximize(L, map(unapply, {-x, x - Pi}, x), optimalitytolerance = 0.001);

$\left[{0.991236282390020151}{\,}\left[\begin{array}{c}1.5020162280669873\end{array}\right]\right]$
 (7) 


Compatibility


•

The Statistics[Lowess] command was introduced in Maple 2015.



