numerically approximate the solution to a linear system - Maple Help


Student[NumericalAnalysis][IterativeApproximate] - numerically approximate the solution to a linear system

Calling Sequence

 IterativeApproximate(A, b, opts)
 IterativeApproximate(A, opts)

Parameters

 A - Matrix; a square $n\times n$ matrix or an augmented $(A|b)$ $n\times m$ matrix, where $m=n+1$
 b - (optional) Vector; a Vector of length $n$
 opts - equations of the form keyword=value, where keyword is one of distanceplotoptions, initialapprox, maxiterations, method, output, plotoptions, showsteps, stoppingcriterion, tolerance; the options for numerically approximating the solution to $Ax=b$

Description

 • The IterativeApproximate command numerically approximates the solution to the linear system A.x=b, using one of the following iterative methods: the Jacobi method, the Gauss-Seidel method, or successive over-relaxation (SOR).
 • It is possible to return both the approximation and the error at each iteration with this command; see the output and stoppingcriterion options under the Options section for more details.
 • It is also possible to view a column graph of the distances (errors) at each step, showing whether convergence is achieved.
 • When the system is 3-dimensional (A is a $3\times 3$ matrix and b has length 3), it is possible to obtain a plot tracing the path of the approximation sequence.
 • The entries of A and b must be expressions that can be evaluated to floating-point numbers.
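The difference between the three methods lies in how each sweep updates the components of the approximation. The following minimal Python sketch (illustrative only; these helper functions are not part of the Maple package) shows the two basic updates: Jacobi computes every component from the previous iterate, while Gauss-Seidel immediately reuses components already updated in the current sweep.

```python
def jacobi_step(A, b, x):
    """One Jacobi sweep: every component is computed from the previous iterate x."""
    n = len(b)
    return [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
            for i in range(n)]

def gauss_seidel_step(A, b, x):
    """One Gauss-Seidel sweep: components updated earlier in the sweep are reused."""
    n = len(b)
    x = list(x)
    for i in range(n):
        x[i] = (b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
    return x
```

Successive over-relaxation generalizes the Gauss-Seidel update by blending it with the previous value through a relaxation factor w (SOR with w = 1 is exactly Gauss-Seidel).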

Notes

 • The initialapprox, tolerance, and maxiterations options are required keyword parameters; they must be given whenever the IterativeApproximate command is used.
 • If A is positive definite or strictly diagonally dominant, then A is invertible, and so the system A.x = b has a unique solution. Use IsMatrixShape to check whether a matrix has one of these properties.
 • If the matrix A is strictly diagonally dominant, both the Jacobi and Gauss-Seidel methods produce a sequence of approximation vectors converging to the solution, for any initial approximation vector.
 • If A is positive definite, the Gauss-Seidel method produces a sequence converging to the solution, for any initial approximation vector; the same holds for the successive over-relaxation method, provided that the relaxation factor w is strictly between 0 and 2.
 • In general, if A gives rise to an iteration matrix T such that the spectral radius of T is strictly less than 1, the resulting sequence is guaranteed to converge to a solution, for any initial approximation vector.
 • This procedure operates numerically: inputs that are not already numeric are first evaluated to floating-point quantities before computations proceed, and the outputs are numeric as well. Note that exact rationals count as numeric and are preserved wherever possible throughout the computation; to obtain floating-point outputs, specify floating-point inputs rather than exact rationals.
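The strict diagonal dominance condition from the notes above is easy to test directly. Here is a small Python helper (illustrative only; within Maple, the IsMatrixShape command mentioned above serves this purpose):

```python
def is_strictly_diagonally_dominant(A):
    """True if |A[i][i]| exceeds the sum of |A[i][j]| over j != i, for every row i."""
    n = len(A)
    return all(abs(A[i][i]) > sum(abs(A[i][j]) for j in range(n) if j != i)
               for i in range(n))
```

For instance, the 4 x 4 matrix used in the Examples section below is strictly diagonally dominant, so both the Jacobi and Gauss-Seidel methods are guaranteed to converge for it.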

Examples

 > with(Student[NumericalAnalysis]):
 > A := Matrix([[10, -1, 2, 0], [-1, 11, -1, 3], [2, -1, 10, -1], [0, 3, -1, 8]]):
 > b := Vector([6, 25, -11, 15]):

View the approximate solution using the Jacobi method.

 > IterativeApproximate(A, b, initialapprox = Vector([0., 0., 0., 0.]), tolerance = 10^(-3), maxiterations = 20, stoppingcriterion = relative(infinity), method = jacobi)
 $\left[\begin{array}{c}{0.9996741452}\\ {2.000447672}\\ {-}{1.000369158}\\ {1.000619190}\end{array}\right]$ (1)
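The result above can be reproduced outside Maple. The following Python sketch (our own helper, not Maple code) iterates the Jacobi update with the relative infinity-norm stopping test $\Vert x_k - x_{k-1}\Vert_\infty / \Vert x_k\Vert_\infty < \mathrm{tol}$ corresponding to stoppingcriterion = relative(∞), and reaches the same vector after 9 iterations:

```python
def jacobi(A, b, x0, tol, maxiter):
    """Jacobi iteration with a relative infinity-norm stopping criterion."""
    n, x = len(b), list(x0)
    for k in range(1, maxiter + 1):
        x_new = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
                 for i in range(n)]
        # relative(infinity) test: ||x_new - x||_inf / ||x_new||_inf < tol
        rel = max(abs(x_new[i] - x[i]) for i in range(n)) / max(map(abs, x_new))
        x = x_new
        if rel < tol:
            return x, k
    return x, maxiter
```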

View the approximate solution with the error at each iteration.

 > IterativeApproximate(A, b, initialapprox = Vector([0., 0., 0., 0.]), tolerance = 10^(-3), maxiterations = 20, stoppingcriterion = relative(infinity), method = jacobi, output = [approximates, distances])
 $\left[\left[\begin{array}{c}{0.}\\ {0.}\\ {0.}\\ {0.}\end{array}\right]{,}\left[\begin{array}{c}{0.6000000000}\\ {2.272727273}\\ {-}{1.100000000}\\ {1.875000000}\end{array}\right]{,}\left[\begin{array}{c}{1.047272727}\\ {1.715909091}\\ {-}{0.8052272727}\\ {0.8852272726}\end{array}\right]{,}\left[\begin{array}{c}{0.9326363636}\\ {2.053305785}\\ {-}{1.049340909}\\ {1.130880682}\end{array}\right]{,}\left[\begin{array}{c}{1.015198760}\\ {1.953695765}\\ {-}{0.9681086260}\\ {0.9738427170}\end{array}\right]{,}\left[\begin{array}{c}{0.9889913017}\\ {2.011414725}\\ {-}{1.010285904}\\ {1.021350510}\end{array}\right]{,}\left[\begin{array}{c}{1.003198653}\\ {1.992241261}\\ {-}{0.9945217368}\\ {0.9944337401}\end{array}\right]{,}\left[\begin{array}{c}{0.9981284735}\\ {2.002306882}\\ {-}{1.001972230}\\ {1.003594310}\end{array}\right]{,}\left[\begin{array}{c}{1.000625134}\\ {1.998670301}\\ {-}{0.9990355755}\\ {0.9988883905}\end{array}\right]{,}\left[\begin{array}{c}{0.9996741452}\\ {2.000447672}\\ {-}{1.000369158}\\ {1.000619190}\end{array}\right]\right]{,}\left[{1.000000000}{,}{0.5768211921}{,}{0.1643187763}{,}{0.08037994851}{,}{0.02869570322}{,}{0.01351079833}{,}{0.005027012138}{,}{0.002354525155}{,}{0.0008884866247}\right]$ (2)

View the approximate solution with the error at each iteration as a column graph.

 > IterativeApproximate(A, b, initialapprox = Vector([0., 0., 0., 0.]), tolerance = 10^(-3), maxiterations = 20, stoppingcriterion = relative(infinity), method = jacobi, output = plotdistance)

The linear system may also be input as an augmented matrix.

 > A := Matrix([[3.32, 1.43, 4.01], [2.03, 5.93, 2.03]])
 ${A}{:=}\left[\begin{array}{ccc}{3.32}& {1.43}& {4.01}\\ {2.03}& {5.93}& {2.03}\end{array}\right]$ (3)
 > IterativeApproximate(A, initialapprox = Vector([0., 0.]), tolerance = 10^(-5), maxiterations = 20, method = SOR(1.25))
 $\left[\begin{array}{c}{1.243771952}\\ {-}{0.08345160693}\end{array}\right]$ (4)
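The SOR update with relaxation factor w = 1.25 can likewise be sketched in Python (an illustrative helper, not Maple code; since the Maple call above relies on the command's default stopping behavior, the sketch uses a relative change test of our own choosing):

```python
def sor(A, b, x0, w, tol, maxiter):
    """SOR sweep: x[i] <- (1-w)*x[i] + (w/A[i][i]) * (b[i] - sum_{j != i} A[i][j]*x[j]),
    reusing components already updated in the current sweep (Gauss-Seidel style)."""
    n, x = len(b), list(x0)
    for _ in range(maxiter):
        x_old = list(x)
        for i in range(n):
            sigma = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (1 - w) * x[i] + w * (b[i] - sigma) / A[i][i]
        # stop when the relative change between sweeps is below tol (our convention)
        if max(abs(x[i] - x_old[i]) for i in range(n)) < tol * max(map(abs, x)):
            return x
    return x
```

Applied to the augmented system above (coefficient matrix [[3.32, 1.43], [2.03, 5.93]], right-hand side [4.01, 2.03]), this converges to the same approximation shown in (4).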