Statistics and Data Analysis - Maple Help


 Statistics and Data Analysis

Least Trimmed Squares Regression

The LeastTrimmedSquares command computes least trimmed squares regression for some data.

 > $\mathrm{with}\left(\mathrm{Statistics}\right):$

In this example, we have 1000 data points. There is a single independent variable, x, with values uniformly distributed between 0 and 10. The dependent variable is a linear function of the independent variable plus additive noise, y = 5 x + 10 + noise, where the noise comes from a probability distribution known to produce severe outliers: the Cauchy distribution with location parameter 0 and scale parameter 5.

 > $x≔\mathrm{Sample}\left(\mathrm{Uniform}\left(0,10\right),1000\right):$
 > $\mathrm{noise}≔\mathrm{Sample}\left(\mathrm{Cauchy}\left(0,5\right),1000\right):$
 > y ≔ 5 *~ x +~ 10 +~ noise:
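The same setup can be mimicked in Python as an illustrative sketch (this is not the Maple code; NumPy's standard Cauchy sampler scaled by 5 stands in for Cauchy(0, 5)):

```python
import numpy as np

rng = np.random.default_rng(0)

# 1000 x-values uniformly distributed between 0 and 10
x = rng.uniform(0, 10, 1000)

# Cauchy noise with location 0 and scale 5: scale the standard Cauchy
noise = 5 * rng.standard_cauchy(1000)

# Dependent variable: the true line y = 5x + 10 plus heavy-tailed noise
y = 5 * x + 10 + noise
```

Because the Cauchy distribution has no finite variance, a handful of these noise values will typically be enormous, which is exactly what makes ordinary least squares struggle below.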

Here we see all data points:

[point plot of the 1000 data points]

Linear least squares regression will be severely affected by the outliers.

 ${\mathrm{ls_regression_result}}{≔}{3.44970682383807}{}{X}{+}{10.6816568681413}$ (1.1)
 ${\mathrm{ls_deviation_from_model}}{≔}{2.86806501793850}$ (1.2)
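The deviation values reported on this page are consistent with the sum of squared differences between the fitted coefficients and the true coefficients (slope 5, intercept 10). That interpretation is an assumption, sketched here in Python using the coefficients printed above:

```python
# Hypothetical reconstruction of the deviation-from-model metric:
# sum of squared differences between fitted and true (slope, intercept).
def deviation_from_model(slope, intercept, true_slope=5.0, true_intercept=10.0):
    return (true_slope - slope) ** 2 + (true_intercept - intercept) ** 2

# Coefficients reported for ordinary least squares above
ls_dev = deviation_from_model(3.44970682383807, 10.6816568681413)

# Coefficients reported for least trimmed squares below
lts_dev = deviation_from_model(5.03537530551575, 9.82475419561272)
```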

Least trimmed squares regression gets much closer to the true line without noise.

 ${\mathrm{lts_regression_result}}{≔}{5.03537530551575}{}{X}{+}{9.82475419561272}$ (1.3)
 ${\mathrm{lts_deviation_from_model}}{≔}{0.0319625041956780}$ (1.4)
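Least trimmed squares minimizes the sum of the h smallest squared residuals rather than all of them, so gross outliers simply fall outside the trimmed set. A minimal brute-force Python sketch of the idea (elemental two-point fits followed by a refit on the h best points; Maple's implementation is more sophisticated and efficient):

```python
import itertools

import numpy as np

def lts_line(x, y, h):
    """Brute-force least trimmed squares for a line y = a*x + b.

    Tries every two-point line, keeps the one with the smallest sum of the
    h smallest squared residuals, then refits by least squares on those
    h points. Only practical for small data sets.
    """
    best = None
    for i, j in itertools.combinations(range(len(x)), 2):
        if x[i] == x[j]:
            continue
        a = (y[j] - y[i]) / (x[j] - x[i])
        b = y[i] - a * x[i]
        r2 = np.sort((y - (a * x + b)) ** 2)
        score = r2[:h].sum()
        if best is None or score < best[0]:
            best = (score, a, b)
    _, a, b = best
    # Refit on the h points with the smallest residuals under the best line
    keep = np.argsort((y - (a * x + b)) ** 2)[:h]
    A = np.column_stack([x[keep], np.ones(h)])
    a, b = np.linalg.lstsq(A, y[keep], rcond=None)[0]
    return a, b

# 18 points exactly on y = 5x + 10, plus 2 gross outliers
xs = np.arange(20, dtype=float)
ys = 5 * xs + 10
ys[3] += 200.0
ys[15] -= 500.0

slope, intercept = lts_line(xs, ys, h=18)
```

With h = 18 the two corrupted points are trimmed away entirely, and the fit recovers the true line.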

The result is even better if we include 900 out of the 1000 points, instead of the default of a little over 500.

 ${\mathrm{lts_900_regression_result}}{≔}{5.00862730339998}{}{X}{+}{10.0156318668695}$ (1.5)
 ${\mathrm{lts_900_deviation_from_model}}{≔}{0.000318785625780919}$ (1.6)

The other robust regression method, implemented in the RepeatedMedianEstimator command, also produces a good result.

 ${\mathrm{rme_regression_result}}{≔}{10.0306661686300}{+}{5.00564125476873}{}{X}$ (1.7)
 ${\mathrm{rme_deviation_from_model}}{≔}{0.000972237653807886}$ (1.8)
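The repeated median estimator (Siegel's method) takes, for each point, the median of the pairwise slopes to all other points, and then the median of those per-point medians; the intercept is the median of y - slope*x. A small Python sketch of that computation (illustrative, not Maple's code):

```python
import numpy as np

def repeated_median_line(x, y):
    """Siegel's repeated median: robust slope and intercept for y = a*x + b."""
    n = len(x)
    inner = []
    for i in range(n):
        # Pairwise slopes from point i to every other point
        slopes = [(y[j] - y[i]) / (x[j] - x[i]) for j in range(n) if j != i]
        inner.append(np.median(slopes))
    a = np.median(inner)          # median of per-point median slopes
    b = np.median(y - a * x)      # robust intercept
    return a, b

# Points on y = 5x + 10 with two gross outliers
xs = np.arange(10, dtype=float)
ys = 5 * xs + 10
ys[2] += 100.0
ys[7] -= 300.0

slope, intercept = repeated_median_line(xs, ys)
```

Since a majority of the points lie exactly on the true line, both median steps land on the true slope and intercept despite the corrupted points.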

In order to visualize these results, we show the same point plot as before, including the four regression lines. The three regression lines from robust methods cannot be distinguished, but the least squares method is clearly off. We zoom in on the vertical range that includes most points.

 [point plot of the data with the four regression lines]

Correlogram

The Correlogram command computes the autocorrelations of a data set and displays them as a column plot. Dashed lines indicate the lower and upper 95% confidence bands for the normal distribution N(0, 1/L), where L is the size of the sample X, and a caption reports how many of the displayed columns lie outside the bands of plus or minus 2, 3, and 4 standard deviations, respectively. AutoCorrelationPlot is an alias for the Correlogram command.
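The quantities behind such a plot can be sketched in Python: the sample autocorrelations plus the approximately ±1.96/sqrt(L) band implied by N(0, 1/L) (an illustrative sketch, not Maple's implementation):

```python
import numpy as np

def autocorrelations(x, max_lag):
    """Sample autocorrelations r_0 .. r_max_lag of a 1-D series."""
    x = np.asarray(x, dtype=float)
    d = x - x.mean()
    denom = (d * d).sum()
    return np.array([(d[:len(x) - k] * d[k:]).sum() / denom
                     for k in range(max_lag + 1)])

L = 200
rng = np.random.default_rng(1)
series = rng.normal(size=L)

acf = autocorrelations(series, max_lag=20)

# 95% confidence band for N(0, 1/L): roughly +/- 1.96 / sqrt(L)
band = 1.96 / np.sqrt(L)
```

For white noise, most of the nonzero-lag columns should fall inside the band; columns outside it suggest genuine serial correlation.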


Detrend

The Detrend command removes any trend from a set of data.

 > $\mathrm{restart}:$
 > $\mathrm{with}\left(\mathrm{Statistics}\right):$

For example, specify some data:

 > $\mathrm{data}≔\mathrm{Matrix}\left(\left[\left[0,1.8\right],\left[1,0.7\right],\left[2.5,2.8\right],\left[4,4.2\right],\left[6.2,3\right]\right]\right)$
 $\left[\begin{array}{cc}0& 1.8\\ 1& 0.7\\ 2.5& 2.8\\ 4& 4.2\\ 6.2& 3\end{array}\right]$ (3.1)

Fit a linear model to the data:

 > $\mathrm{lm}≔\mathrm{LinearFit}\left(a+bt,\mathrm{data},t\right)$
 ${\mathrm{lm}}{≔}{1.49598376946009}{+}{0.366429281218947}{}{t}$ (3.2)

It can be observed from a plot of the data and the linear model that there is some upward trend. The Detrend command removes this trend from the data.

 > $\mathrm{detrend_data}≔\mathrm{Detrend}\left(\mathrm{data}\right)$
 $\left[\begin{array}{c}0.30401623053991367\\ -1.162413050679033\\ 0.38794302749254683\\ 1.2382991056641273\\ -0.7678453130175553\end{array}\right]$ (3.3)
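The detrended values above can be reproduced in Python by fitting the same linear model with least squares and subtracting it (a sketch of the computation, not Maple's code):

```python
import numpy as np

# The sample data from above: columns are t and the observed values
data = np.array([[0, 1.8], [1, 0.7], [2.5, 2.8], [4, 4.2], [6.2, 3]])
t, v = data[:, 0], data[:, 1]

# Least-squares fit of v = a + b*t
A = np.column_stack([np.ones_like(t), t])
a, b = np.linalg.lstsq(A, v, rcond=None)[0]

# Detrending subtracts the fitted linear model from the observations
detrended = v - (a + b * t)
```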

This can be observed in the following plot:

[plot of the data, the fitted linear model, and the detrended data]

Detrend has also been added as an option to several routines in SignalProcessing including SignalPlot, Periodogram, and Spectrogram.

Difference

The Difference command computes lagged differences between elements in a data set.

 > $\mathrm{with}\left(\mathrm{Statistics}\right):$

Define some data:

 > $x≔⟨\mathrm{seq}\left({i}^{2},i=1..10\right)⟩$
 $\left[\begin{array}{r}1\\ 4\\ 9\\ 16\\ 25\\ 36\\ 49\\ 64\\ 81\\ 100\end{array}\right]$ (4.1)
 > $\mathrm{Difference}\left(x\right)$
 $\left[\begin{array}{r}3\\ 5\\ 7\\ 9\\ 11\\ 13\\ 15\\ 17\\ 19\end{array}\right]$ (4.2)
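The same lag-1 differences can be computed in Python with numpy.diff (illustrative; differencing the squares 1, 4, 9, ..., 100 yields the odd numbers 3, 5, ..., 19):

```python
import numpy as np

# The squares 1, 4, 9, ..., 100
x = np.array([i ** 2 for i in range(1, 11)])

# Lag-1 differences: x[i+1] - x[i]
d = np.diff(x)
```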
