 Student[NumericalAnalysis] - Maple Programming Help


Student[NumericalAnalysis]

 IterativeApproximate
 numerically approximate the solution to a linear system

 Calling Sequence
 IterativeApproximate(A, b, initialapprox=value, tolerance=value, maxiterations=value, opts)
 IterativeApproximate(A, initialapprox=value, tolerance=value, maxiterations=value, opts)

Parameters

 A - Matrix; a square $n\times n$ matrix or an augmented $\left(A|b\right)$ matrix of dimension $n\times m$, where $m=n+1$
 b - (optional) Vector; a vector of length $n$
 opts - equations of the form keyword=value, where keyword is one of distanceplotoptions, initialapprox, maxiterations, method, output, plotoptions, showsteps, stoppingcriterion, tolerance; the options for numerically approximating the solution to $\mathrm{Ax}=b$

Options

 • distanceplotoptions = [list]
 The plot options for the column graph of distances when output=plotdistance.
 • initialapprox = Vector
 Initial approximation vector with which to begin the iteration; this vector must be numeric. This is a required keyword parameter.
 • maxiterations = posint
 The maximum number of iterations to perform while approximating the solution to A.x=b.  If the maximum number of iterations is reached and the solution is not within the specified tolerance, a plot of distances can still be returned. This is a required keyword parameter.
 • method = gaussseidel, jacobi, SOR(numeric)
 The method to be used when approximating the solution to A.x=b. See the Notes section below for some of the sufficient conditions for convergence.
 – gaussseidel = Gauss-Seidel method
 – jacobi = Jacobi method
 – SOR(numeric) = successive over-relaxation (SOR) method
 Note that the SOR method is specified by the symbol SOR followed by the relaxation factor in parentheses.  The relaxation factor must be strictly between 0 and 2; otherwise, the generated sequence will diverge. For the purpose of demonstrating this divergence, however, values of the relaxation factor outside this range are still accepted by this procedure.
 By default, method=gaussseidel.
 • output = solution, approximates, distances, plotdistance, plotsolution, or list
 The return value of the function.  The default is solution. To obtain more than one output, specify a list of the desired outputs in order.
 – output=solution returns the final approximation of x.
 – output=approximates returns the approximation at each iteration in a list.
 – output=distances returns the error at each iteration in a list.
 – output=plotdistance returns a column graph of the errors at each iteration.
 – output=plotsolution returns a 3-D plot of the path of the approximations of x. This output is only available when A and b are 3-dimensional.
 • plotoptions=list
 The plot options for the 3-D plot when output=plotsolution.
 • showsteps = true or false
 Whether to print helpful messages in the interface as the IterativeApproximate command executes.
 • stoppingcriterion = function
 The stopping criterion for the approximation of x in the form stoppingcriterion=distance(norm), where distance is either relative or absolute and norm is one of: posint, infinity ($\mathrm{\infty }$), or Euclidean. By default, stoppingcriterion=relative(infinity).
 • tolerance = positive
 The tolerance of the approximation. This is a required keyword parameter.
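For readers who want to see how the three methods and the default relative(infinity) stopping criterion fit together, the following Python sketch (illustrative only; it is not the Maple implementation, and the function names are chosen here for clarity) implements the Jacobi iteration and an SOR iteration that reduces to Gauss-Seidel when the relaxation factor is 1.

```python
import numpy as np

def sor_approximate(A, b, x0, tolerance=1e-3, maxiterations=20, omega=1.0):
    """SOR iteration; omega=1.0 reduces to the Gauss-Seidel method.
    Stops on the relative infinity-norm criterion
    ||x_k - x_{k-1}||_inf / ||x_k||_inf < tolerance."""
    x = np.asarray(x0, dtype=float).copy()
    n = len(b)
    for _ in range(maxiterations):
        x_old = x.copy()
        for i in range(n):
            # Use already-updated components x[:i] and old components x_old[i+1:].
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
            x[i] = (1 - omega) * x_old[i] + omega * (b[i] - s) / A[i, i]
        if np.linalg.norm(x - x_old, np.inf) / np.linalg.norm(x, np.inf) < tolerance:
            break
    return x

def jacobi_approximate(A, b, x0, tolerance=1e-3, maxiterations=20):
    """Jacobi iteration: every component is updated from the previous iterate only."""
    x = np.asarray(x0, dtype=float).copy()
    D = np.diag(A)  # diagonal entries of A
    for _ in range(maxiterations):
        x_new = (b - (A @ x - D * x)) / D
        if np.linalg.norm(x_new - x, np.inf) / np.linalg.norm(x_new, np.inf) < tolerance:
            return x_new
        x = x_new
    return x
```

Applied to the 4x4 system in the Examples section below, jacobi_approximate produces the same sequence of iterates that the Jacobi run of IterativeApproximate displays.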

Description

 • The IterativeApproximate command numerically approximates the solution to the linear system A.x=b, using one of the following iterative methods: Gauss-Seidel, Jacobi, or successive over-relaxation (SOR).
 • It is possible to return both the approximation and the error at each iteration with this command; see the output and stoppingcriterion options under the Options section for more details.
 • It is also possible to view a column graph of the distances (errors) at each step, showing whether convergence is achieved.
 • When A and b are 3-dimensional, it is possible to obtain a plot tracing the path of the approximation sequence.
 • The entries of A and b must be expressions that can be evaluated to floating-point numbers.

Notes

 • The initialapprox, tolerance, and maxiterations options are all required keyword parameters; each must be given whenever the IterativeApproximate command is used.
 • If A is positive definite or strictly diagonally dominant, then A is invertible, and so the system A.x = b has a unique solution. Use IsMatrixShape to check if a matrix has one of these properties.
 • If the matrix A is strictly diagonally dominant, both the Jacobi and Gauss-Seidel methods produce a sequence of approximation vectors converging to the solution, for any initial approximation vector.
 • If A is positive definite, the Gauss-Seidel method produces a sequence converging to the solution, for any initial approximation vector; the same holds for the successive over-relaxation method, provided that the relaxation factor w is strictly between 0 and 2.
 • In general, if A gives rise to an iteration matrix T such that the spectral radius of T is strictly less than 1, the resulting sequence is guaranteed to converge to a solution, for any initial approximation vector.
 • This procedure operates numerically; that is, if the inputs are not already numeric, they are first evaluated to floating-point quantities before computations proceed. The outputs will be numeric as well. Note that exact rationals are considered numeric and are preserved whenever possible throughout the computation; therefore, one must specify floating-point inputs instead of exact rationals to obtain floating-point outputs.
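The spectral-radius condition in the note above can be checked numerically. The sketch below (illustrative Python, not part of the package; the helper name is chosen here) forms the Jacobi iteration matrix $T=I-D^{-1}A$, where $D$ is the diagonal part of $A$, and computes its spectral radius.

```python
import numpy as np

def jacobi_spectral_radius(A):
    """Spectral radius of the Jacobi iteration matrix T = I - D^{-1} A,
    where D is the diagonal part of A.  If rho(T) < 1, the Jacobi
    sequence converges to the solution for every initial approximation."""
    A = np.asarray(A, dtype=float)
    D_inv = np.diag(1.0 / np.diag(A))
    T = np.eye(A.shape[0]) - D_inv @ A
    return max(abs(np.linalg.eigvals(T)))
```

For the strictly diagonally dominant matrix used in the Examples section, this radius is below 1, consistent with the convergence observed there.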

Examples

 > $\mathrm{with}\left(\mathrm{Student}\left[\mathrm{NumericalAnalysis}\right]\right):$
 > $A≔\mathrm{Matrix}\left(\left[\left[10,-1,2,0\right],\left[-1,11,-1,3\right],\left[2,-1,10,-1\right],\left[0,3,-1,8\right]\right]\right):$
 > $b≔\mathrm{Vector}\left(\left[6,25,-11,15\right]\right):$

View the approximate solution using the Jacobi method.

 > $\mathrm{IterativeApproximate}\left(A,b,\mathrm{initialapprox}=\mathrm{Vector}\left(\left[0.,0.,0.,0.\right]\right),\mathrm{tolerance}={10}^{-3},\mathrm{maxiterations}=20,\mathrm{stoppingcriterion}=\mathrm{relative}\left(\mathrm{\infty }\right),\mathrm{method}=\mathrm{jacobi}\right)$
 $\left[\begin{array}{c}{0.9996741452}\\ {2.000447672}\\ {-1.000369158}\\ {1.000619190}\end{array}\right]$ (1)

View the approximate solution with the error at each iteration.

 > $\mathrm{IterativeApproximate}\left(A,b,\mathrm{initialapprox}=\mathrm{Vector}\left(\left[0.,0.,0.,0.\right]\right),\mathrm{tolerance}={10}^{-3},\mathrm{maxiterations}=20,\mathrm{stoppingcriterion}=\mathrm{relative}\left(\mathrm{\infty }\right),\mathrm{method}=\mathrm{jacobi},\mathrm{output}=\left[\mathrm{approximates},\mathrm{distances}\right]\right)$
 $\left[\left[\begin{array}{c}{0.}\\ {0.}\\ {0.}\\ {0.}\end{array}\right]{,}\left[\begin{array}{c}{0.6000000000}\\ {2.272727273}\\ {-1.100000000}\\ {1.875000000}\end{array}\right]{,}\left[\begin{array}{c}{1.047272727}\\ {1.715909091}\\ {-0.8052272727}\\ {0.8852272726}\end{array}\right]{,}\left[\begin{array}{c}{0.9326363636}\\ {2.053305785}\\ {-1.049340909}\\ {1.130880682}\end{array}\right]{,}\left[\begin{array}{c}{1.015198760}\\ {1.953695765}\\ {-0.9681086260}\\ {0.9738427170}\end{array}\right]{,}\left[\begin{array}{c}{0.9889913017}\\ {2.011414725}\\ {-1.010285904}\\ {1.021350510}\end{array}\right]{,}\left[\begin{array}{c}{1.003198653}\\ {1.992241261}\\ {-0.9945217368}\\ {0.9944337401}\end{array}\right]{,}\left[\begin{array}{c}{0.9981284735}\\ {2.002306882}\\ {-1.001972230}\\ {1.003594310}\end{array}\right]{,}\left[\begin{array}{c}{1.000625134}\\ {1.998670301}\\ {-0.9990355755}\\ {0.9988883905}\end{array}\right]{,}\left[\begin{array}{c}{0.9996741452}\\ {2.000447672}\\ {-1.000369158}\\ {1.000619190}\end{array}\right]\right]{,}\left[{1.000000000}{,}{0.5768211921}{,}{0.1643187763}{,}{0.08037994851}{,}{0.02869570322}{,}{0.01351079833}{,}{0.005027012138}{,}{0.002354525155}{,}{0.0008884866247}\right]$ (2)

View the approximate solution with the error at each iteration as a column graph.

 > $\mathrm{IterativeApproximate}\left(A,b,\mathrm{initialapprox}=\mathrm{Vector}\left(\left[0.,0.,0.,0.\right]\right),\mathrm{tolerance}={10}^{-3},\mathrm{maxiterations}=20,\mathrm{stoppingcriterion}=\mathrm{relative}\left(\mathrm{\infty }\right),\mathrm{method}=\mathrm{jacobi},\mathrm{output}=\mathrm{plotdistance}\right)$

The linear system may also be input as an augmented matrix.

 > $A≔\mathrm{Matrix}\left(\left[\left[3.32,1.43,4.01\right],\left[2.03,5.93,2.03\right]\right]\right)$
 ${A}{≔}\left[\begin{array}{ccc}{3.32}& {1.43}& {4.01}\\ {2.03}& {5.93}& {2.03}\end{array}\right]$ (3)
 > $\mathrm{IterativeApproximate}\left(A,\mathrm{initialapprox}=\mathrm{Vector}\left(\left[0.,0.\right]\right),\mathrm{tolerance}={10}^{-5},\mathrm{maxiterations}=20,\mathrm{method}=\mathrm{SOR}\left(1.25\right)\right)$
 $\left[\begin{array}{c}{1.243771952}\\ {-0.08345160693}\end{array}\right]$ (4)
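The SOR run above can be reproduced outside Maple. This Python sketch (illustrative only, not the Maple implementation) splits the augmented matrix into A and b and iterates with relaxation factor 1.25 until the successive iterates agree to the same 1e-5 relative tolerance.

```python
import numpy as np

# Split the augmented matrix (A|b) from the example into A and b.
aug = np.array([[3.32, 1.43, 4.01],
                [2.03, 5.93, 2.03]])
A, b = aug[:, :-1], aug[:, -1]

# SOR with relaxation factor omega = 1.25, starting from the zero vector.
omega = 1.25
x = np.zeros(2)
for _ in range(20):
    x_old = x.copy()
    for i in range(2):
        s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
        x[i] = (1 - omega) * x_old[i] + omega * (b[i] - s) / A[i, i]
    if np.linalg.norm(x - x_old, np.inf) / np.linalg.norm(x, np.inf) < 1e-5:
        break
```

The iteration settles on the same values as output (4).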