Application Center - Maplesoft



2  Matrix Algebra


By: Ali Mohamed Abuoam


The purpose of this worksheet is to illustrate some interesting properties of matrix algebra, and to show how Maple can make mathematics an engaging and enjoyable field, as will be shown in the following sections:



2.1  Linear Transformations and Matrix Algebra: 

The aim of this section is to present some properties of matrix algebra that can be related to linear transformations:


2.1.1 The cube roots of I3: 

Here we want to know: what is a cube root of the 3x3 identity matrix, and how can it be obtained? For this purpose, let's consider the following matrix:


> restart;  with(linalg):

Warning, the protected names norm and trace have been redefined and unprotected

> A:=matrix(3,3,[-1,-4,7,-6,4,-9,-4,1,-3]);


> evalm(A^3),evalm(A^4),evalm(A^6),evalm(A^7),evalm(A^9),evalm(A^10);
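As a quick cross-check outside Maple, the same power computation can be reproduced with plain integer arithmetic. This is an illustrative Python sketch (not part of the original worksheet), using the entries of A defined above:

```python
# Matrix A from the worksheet; its cube should be the 3x3 identity.
A = [[-1, -4, 7],
     [-6, 4, -9],
     [-4, 1, -3]]

def mat_mul(X, Y):
    """Multiply two square matrices stored as lists of rows."""
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

I3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
A3 = mat_mul(mat_mul(A, A), A)
print(A3 == I3)              # True: A is a cube root of I3
print(mat_mul(A3, A) == A)   # True: the powers repeat with period 3 (A^4 = A)
```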


So A cubed is the 3x3 identity matrix; that is, A is a cube root of the 3x3 identity matrix (note the period-3 pattern in the powers above). But not all 3x3 matrices have this property, for example:

> C:=matrix(3,3,[-2,-7,-5,-5,5,2,8,-10,-4]);


> evalm(C^2),evalm(C^3),evalm(C^4),evalm(C^6),evalm(C^9);


And matrix A is not the unique cube root of the 3x3 identity matrix; there are infinitely many of them. But by using linear transformations we can construct such matrices from matrices which are not initially cube roots of the 3x3 identity matrix.

Finding the cube root of I3 using linear transformations: 

To carry out this procedure for matrix C, we first check that C isn't singular, that is, that it has rank 3:

> rank(C);

3

That is fine; C isn't singular. Now call the columns of C:

> v:=col(C,1);  w:=col(C,2);  r:=col(C,3);




Then find a linear transformation by forming the augmented matrix of the three vectors (this transformation turns v into w, w into r and r into v, so applying it three times returns the original columns). Now call the matrix with the column vectors cyclically permuted to the left:

> T:=augment(w,r,v);


Now find the matrix M as follows: 

> M:=evalm(T&*inverse(C));


Finally, check that this transformation has given us a cube root of the 3x3 identity matrix:

> evalm(M^3);


That works. To make a general method that applies this linear transformation to random matrices and obtains cube roots of the 3x3 identity matrix, do the following:


> for i from 1 to 7 do
N:=randmatrix(3,3):
if (rank(N)<>3) then M||i:=`singular`: next end if:
T:=augment(col(N,2),col(N,3),col(N,1)):
M||i:=evalm(T&*inverse(N)):
end do:

Here "M||i" is the concatenation operator, which produces M1 to M7. And "<>" denotes not equal, for testing whether the rank is not equal to 3. If the matrix is singular, then that particular M||i (e.g. M3) is assigned the string "singular".
The "next" statement causes execution to skip to the next value of "i" in the do loop (to continue with the linear transformations). Instead of "end if" one can type "fi", and instead of "end do" one can type "od".
Now call M1 to M7 and their powers, and see whether M1 to M7 are cube roots of the 3x3 identity matrix (and likewise for larger values of i):


> evalm(M1),evalm(M2),evalm(M3),evalm(M4),evalm(M5),evalm(M6),evalm(M7);


Now to see that all these matrices are different cube roots of the 3x3 identity matrix:

> evalm(M1^3),evalm(M2^3),evalm(M3^3), evalm(M4^3),evalm(M5^3),evalm(M6^3),evalm(M7^3);


This procedure is valid for all matrices, not only random ones.
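The construction of this subsection — permute the columns of an invertible matrix cyclically and multiply by its inverse — can be mirrored outside Maple with exact rational arithmetic. This is an illustrative Python sketch (not part of the worksheet); mat_inv is a hand-rolled Gauss-Jordan inverse, not a library call:

```python
from fractions import Fraction

def mat_mul(X, Y):
    """Multiply two square matrices stored as lists of rows."""
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_inv(M):
    """Invert a matrix by Gauss-Jordan elimination over exact rationals."""
    n = len(M)
    A = [[Fraction(x) for x in M[i]] + [Fraction(int(i == j)) for j in range(n)]
         for i in range(n)]
    for c in range(n):
        p = next(r for r in range(c, n) if A[r][c] != 0)   # pivot row
        A[c], A[p] = A[p], A[c]
        A[c] = [x / A[c][c] for x in A[c]]                 # normalize pivot row
        for r in range(n):
            if r != c:
                A[r] = [x - A[r][c] * y for x, y in zip(A[r], A[c])]
    return [row[n:] for row in A]

C = [[-2, -7, -5],
     [-5, 5, 2],
     [8, -10, -4]]                        # the invertible matrix C from above
v, w, r = zip(*C)                         # its columns
T = [list(row) for row in zip(w, r, v)]   # augment(w, r, v)
M = mat_mul(T, mat_inv(C))                # M sends v -> w, w -> r, r -> v
M3 = mat_mul(mat_mul(M, M), M)
I3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
print(M3 == I3)                           # True: M is a cube root of I3
```

Because M fixes each of the spanning columns v, w, r after three applications, M^3 must be the identity; the exact arithmetic makes the check free of rounding noise.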


2.1.2  The fourth roots of I3: 

We have seen how to obtain cube roots of the 3x3 identity matrix; now we want to know how to obtain 4th roots of it. Consider the following example:

> A:=matrix(3,3,[-3,4,4,-1,7,3,0,-10,-3]);


> evalm(A^2),evalm(A^3),evalm(A^4),evalm(A^8),evalm(A^12);


So, A is a 4th root of the 3x3 identity matrix. But in general, how can a 4th root of the 3x3 identity matrix be obtained from any 3x3 matrix? For this purpose, consider the following example:

> B:=matrix(3,3,[5,2,-3,6,-8,4,2,-4,7]);


To check that B isn't singular: 

> rank(B);

3

B isn't singular, and it also isn't a 4th root of I3. Let's now call the column vectors of B:

> v:=col(B,1); w:=col(B,2); r:=col(B,3);




Now make a linear transformation different from the previous one in the subsection above: one which turns v into w, w into -v, and leaves r alone. Call the augmented matrix of the column vectors under that linear transformation:

> M:=augment(w,-v,r);


And so the matrix N can be computed as follows:

> N:=evalm(M&*inverse(B));


To check that it is a 4th root of the 3x3 identity matrix:

> evalm(N^4), evalm(N^8), evalm(N^12), evalm(N^16);


For such random matrices, one can use the general method:  

> for i from 1 to 7 do
T:=randmatrix(3,3):
if (rank(T)<>3) then S||i:=`singular`: next end if:
M:=augment(col(T,2),evalm(-col(T,1)),col(T,3)):
S||i:=evalm(M&*inverse(T)):
end do:
To call the matrices S2 to S5 (in fact we have an infinite set of matrices {Si}, since i can be taken arbitrarily large):

> evalm(S2),evalm(S3),evalm(S4),evalm(S5);


To check that S2 to S5 are different 4th roots of the 3x3 identity matrix (one can raise them to the power 4 or to any multiple of it). The same method can be used to obtain 4th roots of the n by n identity matrix:

> evalm(S2^4),evalm(S3^8),evalm(S4^12),evalm(S5^16);





(This procedure works for all matrices; instead of random entries, supply the elements of the matrix.)
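The fourth-root construction admits the same kind of cross-check. This is an illustrative Python sketch (not part of the worksheet), using the matrix B from this subsection, with mat_inv again a hand-rolled exact Gauss-Jordan inverse:

```python
from fractions import Fraction

def mat_mul(X, Y):
    """Multiply two square matrices stored as lists of rows."""
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_inv(M):
    """Invert a matrix by Gauss-Jordan elimination over exact rationals."""
    n = len(M)
    A = [[Fraction(x) for x in M[i]] + [Fraction(int(i == j)) for j in range(n)]
         for i in range(n)]
    for c in range(n):
        p = next(r for r in range(c, n) if A[r][c] != 0)
        A[c], A[p] = A[p], A[c]
        A[c] = [x / A[c][c] for x in A[c]]
        for r in range(n):
            if r != c:
                A[r] = [x - A[r][c] * y for x, y in zip(A[r], A[c])]
    return [row[n:] for row in A]

B = [[5, 2, -3],
     [6, -8, 4],
     [2, -4, 7]]                              # the invertible matrix B from above
v, w, r = zip(*B)
neg_v = [-x for x in v]
M = [list(row) for row in zip(w, neg_v, r)]   # augment(w, -v, r)
N = mat_mul(M, mat_inv(B))                    # N sends v -> w, w -> -v, r -> r
N2 = mat_mul(N, N)
N4 = mat_mul(N2, N2)
I3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
print(N4 == I3, N2 != I3)   # True True: a 4th root of I3 that is not a square root
```

N squared sends v to -v, so N^2 is not the identity, while two more applications restore every column: this is why the root has order exactly 4.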


2.1.3  The cube roots of O3:

To see the cube roots of the 3x3 zero matrix, let's create the following matrix: 


> restart;  with(linalg):

Warning, the protected names norm and trace have been redefined and unprotected

> A:=array([[-2,8,4],[-2,8,4],[3,-12,-6]]);


> evalm(A^3),evalm(A^6),evalm(A^12);


So, A is a cube root of the 3x3 zero matrix, and A is a singular matrix (i.e., of rank less than 3):


> rank(A);

1

But not all matrices of rank one or two are cube roots of the 3x3 zero matrix.

In general, how can a cube root of the 3x3 zero matrix be obtained from any 3x3 matrix? Here it is again helpful to think in terms of linear transformations. To do this, let's go through the following example:


> M:=array([[3,-2,4],[-3,1,5],[2,1,6]]);


Construct a linear transformation which turns the 1st column into the 2nd, the 2nd column into the 3rd, and the 3rd column into the zero vector:


> N:=augment(col(M,2),col(M,3),vector(3,0));


Now call the matrix C, which equals N multiplied by the inverse of M:

> C:=evalm(N&*inverse(M));


To see what the rank of C is:

> rank(C);

2

To check that it is a cube root of the 3x3 zero matrix:

> evalm(C^3),evalm(C^6),evalm(C^12);


Indeed it is so. 
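The nilpotent construction can be checked the same way. This is an illustrative Python sketch (not part of the worksheet), using the matrix M from above and a hand-rolled exact inverse:

```python
from fractions import Fraction

def mat_mul(X, Y):
    """Multiply two square matrices stored as lists of rows."""
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_inv(M):
    """Invert a matrix by Gauss-Jordan elimination over exact rationals."""
    n = len(M)
    A = [[Fraction(x) for x in M[i]] + [Fraction(int(i == j)) for j in range(n)]
         for i in range(n)]
    for c in range(n):
        p = next(r for r in range(c, n) if A[r][c] != 0)
        A[c], A[p] = A[p], A[c]
        A[c] = [x / A[c][c] for x in A[c]]
        for r in range(n):
            if r != c:
                A[r] = [x - A[r][c] * y for x, y in zip(A[r], A[c])]
    return [row[n:] for row in A]

M = [[3, -2, 4],
     [-3, 1, 5],
     [2, 1, 6]]                # the invertible matrix M from above
c1, c2, c3 = zip(*M)
N = [list(row) for row in zip(c2, c3, (0, 0, 0))]   # augment(col2, col3, 0)
C = mat_mul(N, mat_inv(M))     # C sends c1 -> c2, c2 -> c3, c3 -> 0
C3 = mat_mul(mat_mul(C, C), C)
O3 = [[0, 0, 0], [0, 0, 0], [0, 0, 0]]
print(C3 == O3)                # True: C is a cube root of the zero matrix
```

Since C pushes each column of M one step down the chain c1, c2, c3, 0, three applications annihilate a spanning set, so C^3 must be the zero matrix.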

The following matrix has rank 2, but it isn't a root of the 3x3 zero matrix:

> S:=matrix(3,3,[-5,7,-17,6,-6,18,0,5,-5]);


> rank(S);

2

> evalm(S^2),evalm(S^3),evalm(S^4),evalm(S^5);


But it will become a root of the 3x3 zero matrix after we apply that linear transformation.

To make a general method for any 3x3 random matrix to obtain a cube root of the 3x3 zero matrix (and to check it):


> for i from 1 to 6 do
M:=randmatrix(3,3):
if (rank(M)<>3) then N||i:=`singular`: next end if:
N||i:=evalm(augment(col(M,2),col(M,3),vector(3,0))&*inverse(M)):
end do:

> evalm(N1),evalm(N2),evalm(N3),evalm(N4);


> evalm(N1^3),evalm(N2^3),evalm(N3^3),evalm(N4^3);


Again, this procedure works for all matrices.

2.2 Matrices Which Are Their Own Square Roots:

A matrix which is its own square root is a pleasing thing; but how can one construct such a matrix? To illustrate this, let's introduce the following procedure:

Let e1, e2 and e3 be the standard basis vectors in R^3.


> restart; with(linalg):

Warning, the protected names norm and trace have been redefined and unprotected

> e1:=vector([1,0,0]); e2:=vector([0,1,0]); e3:=vector([0,0,1]);




Now make a linear transformation T of the basis vectors: suppose T(e1) = a, T(e2) = b and T(e3) = c. We want this transformation to be singular, so let T(e3) = r1*a + r2*b, and let the matrix A represent this transformation. Then A is composed of the column vectors a, b and c as follows:

> a:=vector([a1,a2,a3]); b:=vector([b1,b2,b3]);



> c:=evalm(r1*a+r2*b);


> A:=augment(a,b,c);


To have A^2 = A, the linear transformation T must have the following properties: T(a) = a and T(b) = b (T(c) = c will follow from the fact that c is a linear combination of a and b).

Hence a = a1*e1 + a2*e2 + a3*e3, so T(a) = a1*T(e1) + a2*T(e2) + a3*T(e3), and therefore a = a1*a + a2*b + a3*(r1*a + r2*b). Comparing coefficients of a and b, we have a1 + r1*a3 = 1 and a2 + r2*a3 = 0. The latter equation tells us that:

> r2:=-a2/a3;

r2 := -a2/a3    (2.5)

In the same way we have b2 + r2*b3 = 1 and b1 + r1*b3 = 0. The latter equation tells us that:

> r1:=-b1/b3;

r1 := -b1/b3    (2.6)

Substituting these results for r1 and r2 into (a1 + r1a3 =1) and (b2 + r2b3 = 1), we find two equations and 6 unknowns:  

> eq1:=a1-b1*a3/b3=1; eq2:=b2-a2*b3/a3=1;

eq1 := a1 - b1*a3/b3 = 1    (2.7)

eq2 := b2 - a2*b3/a3 = 1    (2.7)

Let a1, a2, a3 and b1 be the independent variables of these equations, and let's solve them for b2 and b3:

> solve({eq1,eq2},{b2,b3});

{b2 = (b1*a2 + a1 - 1)/(a1 - 1), b3 = b1*a3/(a1 - 1)}    (2.8)

Now let us see what A looks like: 

> evalm(A);


This command will not create the matrix A which we want, so we must use "map" and "eval" to make Maple carry out the substitutions we have made:

> A:=map(eval,A);


Also, define b2 and b3: 

> b2:=(a2*b1+a1-1)/(a1-1);  b3:=b1*a3/(a1-1);

b2 := (a2*b1 + a1 - 1)/(a1 - 1)    (2.11)

b3 := b1*a3/(a1 - 1)    (2.11)

Then call A in its new form:

> A:=map(eval,A);


This is an unwieldy form for A; to simplify it:

> A:=simplify(A);


Indeed this matrix is its own square root; let us check:

> simplify(evalm(A^2));


Without "simplify" the result would be a very large matrix. Now, to make a particular 3x3 matrix which is its own square root:

> B:=subs({a1=3,a2=-5,a3=7,b1=-8},evalm(A));


To make sure that matrix B is its own square root: 

> evalm(B^2);


So, we have a valid construction. 
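The substitution just performed can be double-checked with exact rational arithmetic. This is an illustrative Python sketch (not part of the worksheet), using the worksheet's sample values a1=3, a2=-5, a3=7, b1=-8 and the formulas derived above:

```python
from fractions import Fraction as F

# Worksheet's sample values.
a1, a2, a3, b1 = F(3), F(-5), F(7), F(-8)
b2 = (a2 * b1 + a1 - 1) / (a1 - 1)   # from the solved equations (2.8)
b3 = b1 * a3 / (a1 - 1)
r1, r2 = -b1 / b3, -a2 / a3          # from (2.6) and (2.5)

a = [a1, a2, a3]
b = [b1, b2, b3]
c = [r1 * x + r2 * y for x, y in zip(a, b)]   # c = r1*a + r2*b

A = [[a[i], b[i], c[i]] for i in range(3)]    # augment(a, b, c)

def mat_mul(X, Y):
    """Multiply two square matrices stored as lists of rows."""
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

print(mat_mul(A, A) == A)    # True: A is its own square root
```

The Fraction type keeps the entries like 115/7 exact, so the idempotence check is a genuine equality rather than a floating-point approximation.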

2.3  Cramer's Rule and the Adjoint Matrix 

The purpose of this section is to illustrate how Cramer's Rule works, and how the adjoint matrix can be used to calculate a matrix's inverse.

2.3.1  Cramer's Rule: 

Consider a 6 by 6 random matrix: 

> restart; with(linalg):

Warning, the protected names norm and trace have been redefined and unprotected

> B:=matrix(6,6,rand(-6..6));


To make sure that B isn't singular (i.e. that the rank is 6):

> rank(B);

6 (3.1.2)

Suppose that this matrix represents a linear system of 6 equations in 6 unknowns, B x = b, where x is a column vector of the 6 unknowns and b is a column vector of 6 constants. Also, make b out of random integers:

> b:=vector(6,rand(-6..6));


Now, call the augmented matrix of (B,b), and apply the Gauss-Jordan method:

> Baug:=augment(B,b);


> gaussjord(Baug);


One can observe that the values of x1 to x6 have been obtained.

What does Cramer's Rule say?

Cramer's Rule says: xk = det(Ak)/det(B), where Ak is the matrix obtained from B by replacing the kth column vector with b (det is the determinant function). To see this:

> det(B);

173705 (3.1.6)

> A1:=augment(b,col(B,2),col(B,3),col(B,4),col(B,5),col(B,6));


Now compute the result (i.e., evaluate x1) using Cramer's Rule:

> x1:=det(A1)/det(B);

x1 := -137726/173705    (3.1.8)

This agrees with the answer obtained previously, and one can do the same to obtain the other variables.


Why does it work?

Cramer's Rule works for any non-singular matrix. To illustrate that, let's work with a general 6 by 6 matrix:

> A:=matrix(6,6);

A := array(1 .. 6, 1 .. 6, [])    (3.1.9)

> evalm(A);


Let x be the unique solution of Ax = c, where x and c are given as follows (general vectors):

> c:=vector(6); x:=vector(6);

c := array(1 .. 6, [])    (3.1.11)

x := array(1 .. 6, [])    (3.1.11)

> evalm(c);  evalm(x);



"evalm" here constructs the matrix or vector; the vectors c and x are considered column vectors.

Now, construct the identity matrix I6: 

> I6:=matrix(6,6,(m,n)->if (m=n) then 1 else 0 fi);


Now, define Xj to be the identity matrix with its jth column replaced by x ( for example X3 ): 

> X3:=augment(col(I6,1),col(I6,2),x,col(I6,4),col(I6,5),col(I6,6));


Now, think about AX3. For any column of X3 other than column 3, multiplying by A gives the corresponding column of A. See what A does to the 2nd column of X3 (again, consider the result as a column vector):

> evalm(A&*col(X3,2));


In general, AX3 will give us back A, except for the 3rd column: 

> B3:=evalm(A&*X3);


The 3rd column will be Ax = c, so B3 is exactly the matrix used in Cramer's Rule (A with its 3rd column replaced by c). One can thus demonstrate that AXj = Bj. Taking the determinant of this equation, one obtains det(A)det(Xj) = det(Bj). Finally one sees that det(Xj) = det(Bj)/det(A). For example:

> det(X3);

x[3] (3.1.17)

> det(B3)/det(A);

x[3] (3.1.18)

This is Cramer's rule. 

One can write a procedure to implement Cramer's rule (after defining the matrix B and the vector b):

> Cramer:=proc(B::matrix,b::vector)
detB:=det(B):
if (detB<>0) then
n:=rank(B):
[seq(det(matrix(n,n,(r,c)->`if`(c=i,b[r],B[r,c])))/detB,i=1..n)]
else
ERROR(`System has no solution,detB=0`)
end if:
end proc:

Warning, `detB` is implicitly declared local to procedure `Cramer`

Warning, `n` is implicitly declared local to procedure `Cramer`

Warning, `i` is implicitly declared local to procedure `Cramer`

> X:=Cramer(B,b);

X := [-137726/173705, 49334/173705, 24982/173705, -22194/34741, 198757/173705, -8226/34741]    (3.1.19)

One can compare this result with the previous one for matrix B and vector b and see that it is the same. The implicitly declared local variables detB, n and i carry the steps of Cramer's rule inside the procedure. The symbol <> means "not equal": if det(B) is nonzero, Cramer's rule is applied; otherwise the error message "System has no solution, detB=0" appears, since the determinant of the matrix B is equal to zero.
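For comparison, here is an illustrative Python version of the same procedure (not the worksheet's Maple code). The example system B, b is made up for this sketch, and the determinant is computed by cofactor expansion:

```python
from fractions import Fraction

def det(M):
    """Determinant by cofactor expansion along the first row."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def cramer(B, b):
    """Solve B x = b by Cramer's rule, or fail when det(B) = 0."""
    detB = det(B)
    if detB == 0:
        raise ValueError("System has no solution, detB=0")
    n = len(B)
    # x_k = det(A_k)/det(B), with A_k = B with its k-th column replaced by b
    return [Fraction(det([[b[i] if j == k else B[i][j] for j in range(n)]
                          for i in range(n)]), detB)
            for k in range(n)]

B = [[2, 1, -1],
     [1, 3, 2],
     [1, 0, 1]]     # a small sample system (made up for this sketch)
b = [3, 4, 1]
x = cramer(B, b)
# verify that the solution satisfies B x = b
print(all(sum(B[i][j] * x[j] for j in range(3)) == b[i] for i in range(3)))
```

Like the Maple procedure, it checks the determinant first and raises an error for a singular system instead of returning a bogus answer.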


2.3.2  The Adjoint Matrix and the Inverse: 

The adjoint of a matrix is the transpose of the matrix in which each element is replaced by its corresponding cofactor. The "adjoint" command evaluates the adjoint of a matrix as follows:

> N:=matrix(6,6,rand(-6..6));


> adjoint(N);


Here, for example, the (1,6) element of the above matrix (namely 570) should be the cofactor of the (6,1) element of N; it is 5 steps away from the (1,1) element, which is an odd number of steps, so the sign is negative. To check:

> cofN61:=-det(minor(N,6,1));

cofN61 := 570

Conversely, for example, the (2,4) element of the adjoint is a cofactor of the (4,2) element of N; it is 4 steps away from (1,1), which is an even number of steps, so the sign is positive:

> cofN24:=det(minor(N,4,2));

cofN24 := -356

Here the "minor(N,r,c)" function, where N is an n by n matrix, returns the (n-1) x (n-1) submatrix found by deleting the rth row and the cth column of N (its determinant is then taken with "det"):

> minor(N,4,2);


How can the adjoint matrix be used to compute the inverse of the matrix? That can be done in one step:

> evalm(adjoint(N)/det(N));


Compare that with the direct " inverse " command: 

> inverse(N);



They agree.
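The adjoint-inverse relationship rests on the identity N * adj(N) = det(N) * I, which holds for every square matrix. An illustrative Python sketch (not part of the worksheet; the 3x3 sample matrix is made up):

```python
def det(M):
    """Determinant by cofactor expansion along the first row."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def minor(M, r, c):
    """Submatrix with row r and column c deleted (0-indexed here)."""
    return [row[:c] + row[c + 1:] for i, row in enumerate(M) if i != r]

def adjoint(M):
    """Transpose of the cofactor matrix: adj[i][j] = cofactor of M[j][i]."""
    n = len(M)
    return [[(-1) ** (i + j) * det(minor(M, j, i)) for j in range(n)]
            for i in range(n)]

def mat_mul(X, Y):
    """Multiply two square matrices stored as lists of rows."""
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

N = [[4, -2, 1],
     [3, 0, -5],
     [1, 6, 2]]    # sample integer matrix (made up for this sketch)
d = det(N)
prod = mat_mul(N, adjoint(N))
print(all(prod[i][j] == (d if i == j else 0)
          for i in range(3) for j in range(3)))   # True: N*adj(N) = det(N)*I
```

Dividing the adjoint by det(N) therefore yields the inverse whenever det(N) is nonzero, which is exactly what the worksheet's one-step computation does.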