LinearAlgebra - Maple Programming Help


LinearAlgebra

 MatrixInverse
 compute the inverse of a square Matrix or the Moore-Penrose pseudo-inverse of a Matrix

Calling Sequence

 MatrixInverse(A, m, mopts, c, out, options)

Parameters

 A - Matrix or list
 m - (optional) equation of the form method = name, where name is one of 'LU', 'Cholesky', 'subs', 'integer', 'univar', 'polynom', 'complex', 'rational', 'pseudo', or 'none'; the method used to compute the inverse of A
 mopts - (optional) equation of the form methodoptions = list, where the list contains options for specific methods
 c - (optional) equation of the form conjugate = true or false; specifies whether to use the Hermitian transpose in the case of prefactored input from a Cholesky decomposition
 out - (optional) equation of the form output = obj, where obj is 'inverse', 'proviso', or a list containing one or more of these names; selects the result objects to compute
 options - (optional); constructor options for the result object

Description

 • The MatrixInverse(A) function, where A is a nonsingular square Matrix, returns the Matrix inverse ${A}^{\mathrm{-1}}$.  If A is recognized as a singular Matrix, an error message is returned.  If A is non-square, the Moore-Penrose pseudo-inverse is returned.
 • If A is a nonsingular $n\times n$ Matrix, the inverse ${A}^{\mathrm{-1}}$ is computed such that $A·{A}^{\mathrm{-1}}=I$, where I is the $n\times n$ identity Matrix.
 • If A is a non-square $m\times n$ Matrix, or if the option method = pseudo is specified, then the Moore-Penrose pseudo-inverse X is computed such that the following identities hold:
 $A·X·A=A$
 $X·A·X=X$
 $\mathrm{HermitianTranspose}\left(A·X\right)=A·X$
 $\mathrm{HermitianTranspose}\left(X·A\right)=X·A$
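These four Penrose conditions are easy to verify directly. The following is a Python sketch (not Maple code) that checks them in exact rational arithmetic for the $2\times 1$ Matrix used in the Examples section below; matmul and transpose are ad hoc helpers, and for a real Matrix the Hermitian transpose is just the transpose:

```python
from fractions import Fraction as F

def matmul(A, B):
    # naive matrix product over exact rationals
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def transpose(A):
    return [list(r) for r in zip(*A)]

A = [[F(1)], [F(3)]]          # the 2x1 Matrix from example (8)
X = [[F(1, 10), F(3, 10)]]    # its pseudo-inverse, example (9)

assert matmul(matmul(A, X), A) == A   # A.X.A = A
assert matmul(matmul(X, A), X) == X   # X.A.X = X
AX = matmul(A, X)
assert transpose(AX) == AX            # A.X is Hermitian (real case)
XA = matmul(X, A)
assert transpose(XA) == XA            # X.A is Hermitian (real case)
```

Here X.A is the $1\times 1$ identity, while A.X is only a rank-one projector, which is why the pseudo-inverse is a one-sided inverse for non-square input.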
 • If m is included in the calling sequence, then the specified method is used to compute the inverse (except for $1\times 1$, $2\times 2$ and $3\times 3$ Matrices where the calculation of the inverse is hard-coded for efficiency).
 The complex and rational methods augment the input Matrix with a copy of the identity Matrix and then convert the system to reduced row echelon form.
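That augment-and-reduce scheme can be sketched outside Maple. The following Python fragment (exact rational arithmetic; an illustrative helper, not the Maple implementation) inverts a nonsingular Matrix by reducing [A | I] to reduced row echelon form, so the identity half turns into the inverse:

```python
from fractions import Fraction as F

def inverse_rref(A):
    """Invert A by reducing the augmented system [A | I] to RREF.
    Assumes A is square and nonsingular."""
    n = len(A)
    # augment each row with the corresponding identity row
    M = [[F(x) for x in row] + [F(int(i == j)) for j in range(n)]
         for i, row in enumerate(A)]
    for col in range(n):
        # find a nonzero pivot and swap it into place
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        # normalize the pivot row, then eliminate the column elsewhere
        M[col] = [x / M[col][col] for x in M[col]]
        for r in range(n):
            if r != col:
                M[r] = [x - M[r][col] * p for x, p in zip(M[r], M[col])]
    # the right half of [I | A^-1] is the inverse
    return [row[n:] for row in M]
```

With exact rationals no pivoting strategy is needed for accuracy; any nonzero pivot works.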
 The integer method calls Adjoint/integer and divides the result by the determinant.
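The adjoint-and-determinant route can be sketched as follows (a Python illustration using naive cofactor expansion; the actual Adjoint/integer routine is far more efficient). The inverse is the adjugate, the transpose of the cofactor matrix, divided by the determinant:

```python
from fractions import Fraction as F

def minor(A, i, j):
    # delete row i and column j
    return [row[:j] + row[j + 1:] for k, row in enumerate(A) if k != i]

def det(A):
    # cofactor expansion along the first row (fine for small matrices)
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det(minor(A, 0, j))
               for j in range(len(A)))

def inverse_adjoint(A):
    """A^-1 = adjugate(A) / det(A); entry (i, j) of the adjugate
    is the (j, i) cofactor."""
    d = det(A)
    n = len(A)
    return [[F((-1) ** (i + j) * det(minor(A, j, i)), d) for j in range(n)]
            for i in range(n)]
```

For integer input the adjugate stays integral, so all division is deferred to a single exact division by the determinant.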
 The univar method uses an evaluation method to reduce the Matrix to a Matrix of integers, then uses Adjoint/integer, and then uses genpoly to convert back to univariate polynomials.
 The polynom method uses an implementation of fraction-free Gaussian elimination.
 The LU and Cholesky methods use the corresponding LUDecomposition method on the input Matrix (if not already prefactored) and then use forward and back substitution with a copy of the identity Matrix as a right-hand side.
 • The subs method indicates that the input is already triangular, so only the appropriate forward or back substitution is done.
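The substitution step can be sketched in Python (an illustrative helper, not Maple's code; it assumes an upper-triangular input with nonzero diagonal). Each column of the identity is used as a right-hand side for back substitution:

```python
from fractions import Fraction as F

def inverse_upper(U):
    """Invert an upper-triangular matrix by back substitution,
    one identity column at a time."""
    n = len(U)
    X = [[F(0)] * n for _ in range(n)]
    for col in range(n):
        # solve U.x = e_col from the bottom row upward
        for i in range(n - 1, -1, -1):
            rhs = F(int(i == col)) - sum(U[i][j] * X[j][col]
                                         for j in range(i + 1, n))
            X[i][col] = rhs / U[i][i]
    return X
```

Applied to the all-ones upper-triangular Matrix B from the Examples section, this reproduces the bidiagonal inverse shown in result (3).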
 • If the first argument in the calling sequence is a list, then the elements of the list are taken as the Matrix factors of the Matrix A, due to some prefactorization. These factors are in the form of returned values from LUDecomposition. That is, the list items may be:
 – A list of a Vector and Matrix, $\left[\mathrm{ipiv},\mathrm{LU}\right]$, for an LU decomposition. The first member is taken as the pivot Vector and the second member as the superimposed unit-lower and upper triangular LU factors (these are the values returned from LUDecomposition when the output=NAG option is given to that routine). The implied option is method=LU.
 – A list of 3 Matrices, $\left[P,L,U\right]$, the PLU decomposition of a Matrix (these are the values returned from LUDecomposition when the output=['P','L','U'] option is given to that routine). The inverse is ${U}^{\mathrm{-1}}·{L}^{\mathrm{-1}}·{P}^{\mathrm{-1}}$, each component of which can be inexpensively computed (compared to the cost of obtaining the factors). The implied option is method=LU.
 – A list of a single Matrix, [L], for a Cholesky decomposition (as obtained from the LUDecomposition routine when the method=Cholesky option is specified). The data is taken as the lower triangle of the Matrix. The implied option is method=Cholesky.
 Thus, prefactored data supplied for the Matrix argument uniquely determines the computation scheme. The method=name option, if specified with prefactored data, must be in agreement.
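The P, L, U route above can be sketched in Python (exact rationals; illustrative helpers, not Maple's implementation). A permutation matrix satisfies P^(-1) = P^T, so only the two triangular inversions need real work:

```python
from fractions import Fraction as F

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def inv_lower_unit(L):
    # forward substitution; L is unit lower triangular (ones on diagonal)
    n = len(L)
    X = [[F(0)] * n for _ in range(n)]
    for c in range(n):
        for i in range(n):
            X[i][c] = F(int(i == c)) - sum(L[i][j] * X[j][c]
                                           for j in range(i))
    return X

def inv_upper(U):
    # back substitution; U upper triangular with nonzero diagonal
    n = len(U)
    X = [[F(0)] * n for _ in range(n)]
    for c in range(n):
        for i in range(n - 1, -1, -1):
            s = F(int(i == c)) - sum(U[i][j] * X[j][c]
                                     for j in range(i + 1, n))
            X[i][c] = s / U[i][i]
    return X

def inverse_from_plu(P, L, U):
    """A = P.L.U, so A^-1 = U^-1 . L^-1 . P^-1, and P^-1 is P transposed."""
    Pt = [list(r) for r in zip(*P)]
    return matmul(matmul(inv_upper(U), inv_lower_unit(L)), Pt)
```

Each of the three factor inverses is cheap relative to the cost of the factorization itself, which is the point of reusing prefactored data.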
 • The methodoptions=list option can be used to specify options to a particular method.  This pertains only to the pseudo-inverse method. The method-specific option specified in the list can be of the form tolerance = t, where t is a positive number. This number, usually small, is used in the case of a floating-point Matrix as the tolerance for accepting a singular value as being effectively nonzero, for use in the pseudo-inverse computation.
 • The conjugate option specifies whether to use the Hermitian transpose when A is a list of a single Matrix from a symbolic Cholesky decomposition. The default is true. The conjugate=true condition can be abbreviated as conjugate.
 • The output=obj option can be used to specify the objects returned. The two possible outputs are inverse and proviso. The proviso is relevant only to the Moore-Penrose pseudo-inverse computation. In the floating-point case, it is the ratio of the smallest singular value accepted as nonzero to the largest singular value. In the exact symbolic case, it is the determinant of the Matrix. If this proviso is zero in the exact case, or close to zero in the floating-point case, then the four identities given above will not necessarily hold.
 • If you want to compute a single product ${A}^{\mathrm{-1}}·B$, for a Vector or Matrix $B$, it is typically more efficient to use the command $\mathrm{LinearAlgebra}\left[\mathrm{LinearSolve}\right]\left(A,B\right)$ than to compute the inverse of $A$ and then form the product. If you need to do this many times with the same floating-point Matrix $A$, then you can obtain fast and accurate results by computing its LU decomposition or QR decomposition once and passing that to LinearAlgebra[LinearSolve]. See the LinearSolve help page for more details.
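The factor-once, solve-many idea can be illustrated outside Maple. The following Python sketch (exact rationals, no pivoting, so it assumes the leading minors of A are nonzero; lu and solve are hypothetical helpers) factors A once and reuses the factors for each right-hand side instead of ever forming the inverse:

```python
from fractions import Fraction as F

def lu(A):
    """Doolittle LU without pivoting: A = L.U with L unit lower triangular."""
    n = len(A)
    L = [[F(int(i == j)) for j in range(n)] for i in range(n)]
    U = [[F(x) for x in row] for row in A]
    for k in range(n):
        for i in range(k + 1, n):
            m = U[i][k] / U[k][k]
            L[i][k] = m
            U[i] = [a - m * b for a, b in zip(U[i], U[k])]
    return L, U

def solve(L, U, b):
    """Solve A.x = b given A = L.U, via forward then back substitution."""
    n = len(b)
    y = []
    for i in range(n):                     # L.y = b
        y.append(b[i] - sum(L[i][j] * y[j] for j in range(i)))
    x = [F(0)] * n
    for i in range(n - 1, -1, -1):         # U.x = y
        x[i] = (y[i] - sum(U[i][j] * x[j]
                           for j in range(i + 1, n))) / U[i][i]
    return x

A = [[2, 1], [1, 3]]
L, U = lu(A)
# the factorization is reused for each right-hand side
x1 = solve(L, U, [F(3), F(5)])   # solves A.x = (3, 5)
x2 = solve(L, U, [F(1), F(0)])   # solves A.x = (1, 0)
```

Each extra solve costs only two triangular substitutions, versus a full matrix product (and worse floating-point error) if the explicit inverse were used.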
 • The constructor options provide additional information (readonly, shape, storage, order, datatype, and attributes) to the Matrix constructor that builds the result. These options may also be provided in the form outputoptions=[...], where [...] represents a Maple list.  If a constructor option is provided in both the calling sequence directly and in an outputoptions option, the latter takes precedence (regardless of the order).
 • This function is part of the LinearAlgebra package, and so it can be used in the form MatrixInverse(..) only after executing the command with(LinearAlgebra). However, it can always be accessed through the long form of the command by using LinearAlgebra[MatrixInverse](..).
 • This function has an equivalent shortcut notation, ${A}^{\mathrm{-1}}$, for square Matrices A.

Examples

 > $\mathrm{with}\left(\mathrm{LinearAlgebra}\right):$
 > $\mathrm{MatrixInverse}\left(⟨⟨a,c⟩|⟨b,d⟩⟩\right)$
 $\left[\begin{array}{cc}\frac{{d}}{{a}{}{d}{-}{b}{}{c}}& {-}\frac{{b}}{{a}{}{d}{-}{b}{}{c}}\\ {-}\frac{{c}}{{a}{}{d}{-}{b}{}{c}}& \frac{{a}}{{a}{}{d}{-}{b}{}{c}}\end{array}\right]$ (1)
 > $\mathrm{MatrixInverse}\left(⟨⟨1,2,3,4⟩|⟨2,3,4,0⟩|⟨3,4,0,0⟩|⟨4,0,0,0⟩⟩,\mathrm{method}=\mathrm{integer}\right)$
 $\left[\begin{array}{cccc}{0}& {0}& {0}& \frac{{1}}{{4}}\\ {0}& {0}& \frac{{1}}{{4}}& {-}\frac{{3}}{{16}}\\ {0}& \frac{{1}}{{4}}& {-}\frac{{3}}{{16}}& \frac{{1}}{{64}}\\ \frac{{1}}{{4}}& {-}\frac{{3}}{{16}}& \frac{{1}}{{64}}& \frac{{5}}{{256}}\end{array}\right]$ (2)
 > $A≔⟨⟨\frac{2}{3},-\frac{1}{3},-\frac{1}{3}⟩|⟨-\frac{1}{3},\frac{2}{3},-\frac{1}{3}⟩|⟨-\frac{1}{3},-\frac{1}{3},\frac{2}{3}⟩⟩:$

$A$ has rank 2 and, hence, is singular:

 > $\mathrm{MatrixInverse}\left(A\right)$
 > $B≔⟨⟨1,0,0,0⟩|⟨1,1,0,0⟩|⟨1,1,1,0⟩|⟨1,1,1,1⟩⟩:$
 > $\mathrm{MatrixInverse}\left(B,\mathrm{method}=\mathrm{subs}\right)$
 $\left[\begin{array}{cccc}{1}& {-1}& {0}& {0}\\ {0}& {1}& {-1}& {0}\\ {0}& {0}& {1}& {-1}\\ {0}& {0}& {0}& {1}\end{array}\right]$ (3)
 > $C≔\mathrm{Matrix}\left(\left[\left[16,30,21,25,16\right],\left[68,42,52,36\right],\left[36,37,24\right],\left[53,30\right],\left[20\right]\right],\mathrm{scan}={\mathrm{triangular}}_{\mathrm{upper}},\mathrm{shape}=\mathrm{symmetric},\mathrm{datatype}=\mathrm{float}\right):$
 > $G≔\mathrm{LUDecomposition}\left(C,\mathrm{method}=\mathrm{Cholesky}\right):$
 > $\mathrm{C1}≔\mathrm{MatrixInverse}\left(\left[G\right]\right)$
 ${\mathrm{C1}}{≔}\left[\begin{array}{ccccc}{0.629757785467128}& {-0.410034602076124}& {-0.204152249134948}& {-0.155709342560554}& {0.712802768166090}\\ {-0.410034602076124}& {0.655709342560553}& {0.185121107266436}& {0.192041522491349}& {-1.36245674740484}\\ {-0.204152249134948}& {0.185121107266436}& {0.214532871972318}& {0.0449826989619377}& {-0.494809688581315}\\ {-0.155709342560554}& {0.192041522491349}& {0.0449826989619377}& {0.186851211072664}& {-0.555363321799308}\\ {0.712802768166090}& {-1.36245674740484}& {-0.494809688581315}& {-0.555363321799308}& {3.35899653979239}\end{array}\right]$ (4)
 > $\mathrm{map}\left(\mathrm{fnormal},\mathrm{.}\left(\mathrm{C1},C\right)\right)$
 $\left[\begin{array}{ccccc}{1.}& {-0.}& {-0.}& {-0.}& {-0.}\\ {0.}& {1.000000000}& {0.}& {0.}& {0.}\\ {0.}& {0.}& {1.000000000}& {0.}& {-0.}\\ {0.}& {0.}& {0.}& {1.000000000}& {0.}\\ {0.}& {-0.}& {0.}& {-0.}& {1.}\end{array}\right]$ (5)
 > $p,\mathrm{lu}≔\mathrm{LUDecomposition}\left(C,\mathrm{output}='\mathrm{NAG}'\right):$
 > $\mathrm{C2}≔\mathrm{MatrixInverse}\left(\left[p,\mathrm{lu}\right]\right)$
 ${\mathrm{C2}}{≔}\left[\begin{array}{ccccc}{0.629757785467128}& {-0.410034602076125}& {-0.204152249134948}& {-0.155709342560554}& {0.712802768166091}\\ {-0.410034602076126}& {0.655709342560555}& {0.185121107266437}& {0.192041522491350}& {-1.36245674740485}\\ {-0.204152249134948}& {0.185121107266436}& {0.214532871972318}& {0.0449826989619377}& {-0.494809688581315}\\ {-0.155709342560554}& {0.192041522491350}& {0.0449826989619381}& {0.186851211072665}& {-0.555363321799310}\\ {0.712802768166092}& {-1.36245674740485}& {-0.494809688581316}& {-0.555363321799309}& {3.35899653979240}\end{array}\right]$ (6)
 > $\mathrm{map}\left(\mathrm{fnormal},\mathrm{.}\left(C,\mathrm{C2}\right)\right)$
 $\left[\begin{array}{ccccc}{1.000000000}& {0.}& {-0.}& {0.}& {0.}\\ {-0.}& {1.000000000}& {-0.}& {-0.}& {-0.}\\ {0.}& {-0.}& {1.000000000}& {-0.}& {0.}\\ {-0.}& {0.}& {0.}& {1.000000000}& {-0.}\\ {-0.}& {0.}& {-0.}& {-0.}& {1.000000000}\end{array}\right]$ (7)
 > $M≔⟨⟨1,3⟩⟩$
 ${M}{≔}\left[\begin{array}{c}{1}\\ {3}\end{array}\right]$ (8)
 > $\mathrm{Mi}≔\mathrm{MatrixInverse}\left(M,\mathrm{method}=\mathrm{pseudo}\right)$
 ${\mathrm{Mi}}{≔}\left[\begin{array}{cc}\frac{{1}}{{10}}& \frac{{3}}{{10}}\end{array}\right]$ (9)
 > $\mathrm{Norm}\left(\mathrm{.}\left(M,\mathrm{Mi},M\right)-M\right)$
 ${0}$ (10)
 > $\mathrm{Norm}\left(\mathrm{.}\left(\mathrm{Mi},M,\mathrm{Mi}\right)-\mathrm{Mi}\right)$
 ${0}$ (11)
 > $\mathrm{Norm}\left(\mathrm{HermitianTranspose}\left(\mathrm{.}\left(\mathrm{Mi},M\right)\right)-\mathrm{.}\left(\mathrm{Mi},M\right)\right)$
 ${0}$ (12)
 > $\mathrm{Norm}\left(\mathrm{HermitianTranspose}\left(\mathrm{.}\left(M,\mathrm{Mi}\right)\right)-\mathrm{.}\left(M,\mathrm{Mi}\right)\right)$
 ${0}$ (13)
 > $M≔⟨⟨2.0,1.0,1.0⟩|⟨0.0,3.0,1.0⟩⟩$
 ${M}{≔}\left[\begin{array}{cc}{2.0}& {0.}\\ {1.0}& {3.0}\\ {1.0}& {1.0}\end{array}\right]$ (14)
 > $\mathrm{MatrixInverse}\left(M,\mathrm{method}=\mathrm{pseudo},\mathrm{methodoptions}=\left['\mathrm{tolerance}'=0.000005\right],\mathrm{output}=\left['\mathrm{inverse}','\mathrm{proviso}'\right]\right)$
 $\left[\begin{array}{ccc}{0.454545454545455}& {-0.0454545454545454}& {0.136363636363636}\\ {-0.181818181818182}& {0.318181818181818}& {0.0454545454545456}\end{array}\right]{,}{0.531845515847812}$ (15)
