Application Center - Maplesoft

# Vectors and Matrices



By: Ali Mohamed Abu oam (asam295@hotmail.com)

Short History:

The beginning of matrices and determinants goes back to the second century BC, although traces can be seen as far back as the fourth century BC. However, it was not until near the end of the 17th century that the ideas reappeared and development really got underway.

It is not surprising that the beginnings of matrices and determinants should arise through the study of systems of linear equations. The Babylonians studied problems that lead to simultaneous linear equations, and some of these are preserved in clay tablets that survive.

The Chinese, between 200 BC and 100 BC, came much closer to matrices than the Babylonians. Indeed, it is fair to say that the text "Nine Chapters on the Mathematical Art", written during the Han Dynasty, gives the first known example of matrix methods.

Definition:

What is a matrix?

Matrices are everywhere: if you have used a spreadsheet such as Excel or Lotus, or written a table, you have used a matrix. Matrices make the presentation of numbers clearer and make calculations easier to program. In short, a matrix is a rectangular array of elements. The elements can be symbolic expressions or numbers.

What is a vector?

A vector is a special type of matrix that has one row or one column. So there are two types of vectors: row vectors and column vectors.

The purpose of this chapter is to introduce vectors and matrices in computations and to ease work with their arithmetic properties.

1.1         Vectors and Planes:

1.1.1  Vectors:

How to construct a vector:

To construct a row vector:

 > a:=vector([2,0,3]);
 (1)

To construct a column vector:

 > b:=<-5,2,-6>;
 (2)

Or, do the following:

 > b:=Vector([-5,2,-6]);
 (3)

Also, it is useful to know how to construct a row vector or a column vector if its entries are equal:

 > v:=vector(3,-4);
 (4)

This produces a row vector with 3 entries, each equal to -4. To construct a column vector with 5 rows whose entries all equal, for example, 7:

 > w:=Vector(5,7);
 (5)

Also, one can construct a general vector:

 > c:=vector(5);    evalm(c);
 (6)

Vector Sum:

Here, use the VectorSumPlot command, which displays the sum of a collection of vectors. The sum is displayed graphically by connecting the head of each vector in the collection to the tail of another vector in the collection. As there are many ways to form these connections, the routine by default displays all of them, effectively producing a lattice; following any path of edges from the origin leads to the point that corresponds to the sum. By default, the sum vector is drawn in black. (First load the package and then its short commands):

 > restart;
 > with(Student[LinearAlgebra]):
 Warning, the assigned name GramSchmidt now has a global binding Warning, the protected name . has been redefined and unprotected
 > infolevel[Student]:=1:
 > v:=<2,-5>;
 (7)
 > w:=<-7,1/2>;
 (8)
 > v+w;
 (9)

Here 'infolevel[Student]:=1:' sets the level of informational messages (about entries, functions, animations, etc.) printed by the Student package routines.

To see this sum in a plot:

 > VectorSumPlot(v,w);
 > VectorSumPlot(w,v);

We get the same plot because the addition of vectors is commutative (and, of course, the vectors have the same dimension). To do the same for 3-D vectors:
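The commutativity can also be checked numerically outside Maple; here is a minimal Python sketch (plain tuples, no plotting) using the same vectors v and w:

```python
# Component-wise sum of two vectors of equal dimension
def vec_add(x, y):
    assert len(x) == len(y), "vectors must have the same dimension"
    return tuple(a + b for a, b in zip(x, y))

v = (2, -5)
w = (-7, 0.5)   # 1/2 written as a float

# Addition is commutative: v + w == w + v
print(vec_add(v, w))   # (-5, -4.5)
print(vec_add(w, v))   # (-5, -4.5)
```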

 > VectorSumPlot(<1,-2,2/3>,<0,1,-2>,<-3,1,0>);

To animate the plot so that the possible paths are shown one at a time, use the 'output' option:

 > VectorSumPlot(<1,-2,2/3>,<0,1,-2>,<-3,1,0>,output=animation);

Cross Product:

Now, to see the difference between the dot product ('dotprod') and the cross product ('crossprod') of vectors, let's define the two following random vectors a and b:

 > with(linalg):

 Warning, the protected names norm and trace have been redefined and unprotected
 > a:=vector(3,rand(-3..3));    b:=vector(3,rand(-5..5));
 (10)

Where " rand( ) " determines the range of the integers.

To see the dot product:

 > dp:=dotprod(a,b);    dp1:=dotprod(b,a);
 (11)

The results are the same (the dot product is commutative). Now find the cross product, so as to see the difference between the two products:

 > cp:=crossprod(a,b);   cp1:=crossprod(b,a);
 (12)

The results differ: one is the negative of the other, so the cross product is anticommutative.
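For comparison, the anticommutativity can be verified in a short Python sketch, with the 3-D cross product written out component by component (the vectors here are the fixed examples used with CrossProductPlot below, not the random a and b):

```python
def cross(a, b):
    """Cross product of two 3-D vectors."""
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return sum(x*y for x, y in zip(a, b))

a = (1, -2, 0)
b = (0, -1, 3)

cp  = cross(a, b)   # (-6, -3, -1)
cp1 = cross(b, a)   # ( 6,  3,  1): the negative of cp
# The cross product is perpendicular to both factors
print(dot(cp, a), dot(cp, b))   # 0 0
```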

Now use the " CrossProductPlot" command. By default, the result is colored magenta unless we used options for that.

 > CrossProductPlot(<1,-2,0>,<0,-1,3>);

In the" CrossProductPlot " the colors can be adjusted, that is:

 > CrossProductPlot(<1,-2,0>,<0,-1,3>,vectorcolors=[green,black,red]);

The result is red (the colors apply, respectively, to v1, v2, and the resulting vector). Other controls are global in nature, such as orientation, title, and axes (to use more options):

 > CrossProductPlot(<1,-2,0>,<0,-1,3>,vectorcolors=[green,black, red], orientation=[-50,50],title="Cross Product with orientation and view specified",view=[-3..3,-3..3,-3..3]);

Where " orientation = [-50,50] " specifies the theta and phi angles of the point in 3-dimensions from which the plot is to be viewed. The default is at a point that is out perpendicular from the screen ( negative z axis ) so that the entire surface can be seen. the point is described in spherical coordinates where theta and phi are angles in degrees. with default 45 degrees in each case.

" view " indicates minimum and maximum coordinates to be displayed.

1.1.2   Planes:

To visualize planes and their related vectors in three dimensions, use the "PlanePlot" command, which accepts a plane specified in several different ways. It displays the plane and, optionally, a normal vector, a pair of basis vectors, a vector to a point on the plane from the origin, and the vector from the origin to the closest point on the plane to the origin.

 > restart;
 > with(Student[LinearAlgebra]):
 Warning, the protected name . has been redefined and unprotected
 > infolevel[Student]:=1:

To specify the plane by its normal vector and a point on it:
 > PlanePlot(<2,0,-1>,<1,-2,0>);
 normal vector: <2., 0., -1.> point on plane nearest origin: <.8, 0., -.4> basis vectors: <0., 2.236, 0.>, <1., 0., 2.>

To see a pair of vectors that form a basis for the plane (when added to the specified point),  use "showbasis" option:

 > PlanePlot(<1,-2,-1>,<2,-1,1>,showbasis);
 normal vector: <1., -2., -1.> point on plane nearest origin: <.5, -1., -.5> basis vectors: <2., 1.29, -.5798>, <1., -.5798, 2.16>

(1) One can specify the plane via a set or list of vectors (a basis) as follows:

 > PlanePlot({<5,-1,3>,<1,4,-2>});
 normal vector: <-1.69, 2.197, 3.55> point on plane nearest origin: <0., 0., 0.> basis vectors: <5., -1., 3.>, <1., 4., -2.>

(2) One can specify the plane via a vector-valued function of two parameters as follows:

 > PlanePlot((s,t)-> <1,0,-1>+s*<2,-1,3>+t*<-1,0,2>,showbasis, basisoptions=[color=red],orientation=[50,75],showpoint);
 normal vector: <.3381, 1.183, .169> point on plane nearest origin: <.3704e-1, .1296, .1852e-1> basis vectors: <-.6325, 0., 1.265>, <.7559, -.378, 1.134>

(3) One can also specify the plane via an equation or algebraic expression in three variables. In this case the variable names must be specified unless there are exactly three such names in the first argument:

 > PlanePlot(4*x+2*z=-2,[x,y,z]);
 normal vector: <4., 0., 2.> point on plane nearest origin: <.4, 0., .2> basis vectors: <0., 1., 0.>, <-.4472, 0., .8944>

In either the normal-vector form or the basis-vectors form, if a point on the plane is not specified, then the plane passes through the origin.
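The "point on plane nearest origin" that PlanePlot reports is the projection of the given point onto the normal direction. A small Python sketch of that arithmetic, using the normal and point from the first PlanePlot example:

```python
def dot(u, v):
    return sum(a*b for a, b in zip(u, v))

n = (2, 0, -1)    # normal vector of the plane
p = (1, -2, 0)    # a point on the plane

# Nearest point to the origin: ((n.p)/(n.n)) * n
t = dot(n, p) / dot(n, n)          # 2/5
nearest = tuple(t * c for c in n)
print(nearest)   # (0.8, 0.0, -0.4), matching PlanePlot's report
```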

1.1.3    Projections:

Projecting vectors onto lines and planes is a common and important operation. To visualize this operation, use the "ProjectionPlot" command, which displays the vector being projected, the line or plane onto which it is being projected, the projected vector, and the projection onto the orthogonal complement of the projection line or plane.

Furthermore, dashed lines are drawn from the head of the input vector to the head of its projection onto the line or plane, and to the head of its projection onto the corresponding orthogonal complement:

 > ProjectionPlot(<2,-1>,<2/3,1>);
 Vector:     <2, -1> Projection: <.1538, .2308> Orthogonal complement: <1.846, -1.231> Norm of orthogonal complement: 2.219

One can use  " show... " options to suppress these components and optionally request that the vector or vectors used to define the projection line or plane be shown as follows :

 > ProjectionPlot(<2,-1>,<2/3,1>,showbasis=true,showlines=false);
 Vector:     <2, -1> Projection: <.1538, .2308> Orthogonal complement: <1.846, -1.231> Norm of orthogonal complement: 2.219

You can see the difference between the two plots when the "show..." options are used.
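The numbers ProjectionPlot reports can be reproduced with the standard projection formula proj_u(v) = ((v.u)/(u.u)) u; here is an exact-arithmetic Python sketch for the same vectors <2,-1> and <2/3,1>:

```python
from fractions import Fraction as F

def dot(a, b):
    return sum(x*y for x, y in zip(a, b))

v = (F(2), F(-1))       # the vector being projected
u = (F(2, 3), F(1))     # direction of the projection line

c = dot(v, u) / dot(u, u)                     # 3/13
proj = tuple(c * x for x in u)                # (2/13, 3/13)
orth = tuple(a - b for a, b in zip(v, proj))  # (24/13, -16/13)

print([float(x) for x in proj])   # approx [0.1538, 0.2308], as reported
print([float(x) for x in orth])   # approx [1.846, -1.231]
```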

A plane to be projected onto can also be specified as a set or a list of vectors:

 > ProjectionPlot(<1,0,-1>,{<-4,-2,3>,<0,-3,1>},showbasis);
 Vector:     <1, 0, -1> Projection: <1.167, .9569e-1, -.7129> Orthogonal complement: <-.1675, -.9569e-1, -.2871> Norm of orthogonal complement: .3459

1.2      Matrices:

1.2.1  How to create a matrix?

(1) If a matrix has all entries equal (no matter how many rows or columns it has): suppose we want to create a 4 by 6 matrix with every entry equal to 2:

 > matrix(4,6,2);
 (13)

Here the third argument supplies the value of every entry; more generally, it can be a function that generates the elements.

(2) By using a functional operator as the third argument to represent a function of two variables, the first of which is the row index and the second the column index, we obtain a matrix whose elements equal the row number:

 > f:=(m,n)->m;
 (14)
 > matrix(3,6,f);
 (15)

Here 3 and 6 indicate the numbers of rows and columns.
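The same index-function idea carries over to plain Python: build each entry by calling a function of the (row, column) pair. A minimal sketch mirroring matrix(3,6,f):

```python
# f(m, n) returns the element in row m, column n; here it is just the row index
f = lambda m, n: m

# A 3 by 6 matrix, with 1-based indices as in Maple
M = [[f(m, n) for n in range(1, 7)] for m in range(1, 4)]
for row in M:
    print(row)
# [1, 1, 1, 1, 1, 1]
# [2, 2, 2, 2, 2, 2]
# [3, 3, 3, 3, 3, 3]
```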

To create a more complicated matrix, let's make one whose elements are x to the power of the row number times y to the power of the column number:

 > g:=(m,n)->x^m*y^n;
 (16)
 > matrix(4,6,g);
 (17)

If x and y are integers (for example x = 2, y = -5):

 > s:=(m,n)->2^m*(-5)^n;
 (18)
 > N:=matrix(4,6,s);
 (19)

Here are other methods for typing matrices:

 > A:=<<1,2>|<2,-3>|<4,0>>; B:=Matrix(3,3,[[1,2,0],[-1,3,4],[5,0,9]]); S:=Matrix([[2,1,0,-5],[4,5,-7,8]]); M:=array([[1,0,4],[5,-6,3]]); K:=matrix(3,3,[2,3,-9,-7,0,4,5,8,0]);
 (20)

One can also enter a symbolic matrix:

 > with(linalg):  with(LinearAlgebra):
 Warning, the previous binding of the name GramSchmidt has been removed and it now has an assigned value
 > A:=Matrix(4,5,symbol=n);
 (21)

Special Types of Matrices:

* Diagonal Matrices:

To make a function that generates a diagonal matrix "Di" (the name "D" is protected by Maple): the symbol <> denotes "not equal", and fi ends an if statement:

 > Di:=matrix(4,4,(i,j)-> if i<>j then Di[i,j]:=0 else Di[i,j]:=d||i||j fi);
 Warning, `Di` is implicitly declared local to procedure (22)

* Lower Triangular Matrices:

Also one can make a function that generates the general form of a lower triangular matrix:

 > K:=matrix(4,4,(i,j)->if i<j then K[i,j]:=0 else K[i,j]:=k||i||j fi);
 Warning, `K` is implicitly declared local to procedure (23)

* Upper Triangular Matrices:

Similarly, a function that uses an if test generates the general form of an upper triangular matrix:

 > U:=matrix(4,4,(i,j)->if i>j then U[i,j]:=0 else U[i,j] :=u||i||j fi);
 Warning, `U` is implicitly declared local to procedure (24)

1.2.2      Matrices Addition and Multiplication:

 > restart;

 > with(linalg):with(plots):
 Warning, the protected names norm and trace have been redefined and unprotected Warning, the name changecoords has been redefined

Let's first define the two following matrices A and B :

 > A:=matrix(4,4,[2,-3,0,1,-9,5,-6,0,-4,7,8,-3,-7,1,6,-9]);
 (25)
 > matrixplot(A);

Where " matrixplot " command defines a three-dimensional graph where the x and y coordinates represent the rows and columns indices of A respectively. The z values are the corresponding entries of A.

 > B:=<<-4,1,0,2>|<-3,2,-1,-7>|<9,-4,-8,0>|<-2,3,1,-5>>;
 (26)
 > matrixplot(B);

To add A and B, use the "evalm" command to evaluate the matrix expression:

 > evalm(A+B);
 (27)
 > evalm(B+A);
 (28)

We get the same result, which means that matrix addition is commutative:

 > matrixplot(%);

The "  % " denotes resent result, that means plot ( B+A ).Also one can add options to the plot, and for addition the matrices must have the same dimension. For multiplication use the following:

 > evalm(A&*B);
 (29)
 > evalm(B&*A);
 (30)
 > matrixplot(A&*B);
 > matrixplot(B&*A);

Here one can observe that the two products of A and B are not equal, and neither are their plots, which shows that matrix multiplication is not commutative. This is why we use the sign "&*" to denote matrix multiplication (we can't use "*", because it denotes commutative multiplication).
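Non-commutativity is easy to check with a few lines of Python as well (a hand-rolled matrix product on nested lists; the 2 by 2 matrices are small illustrative examples, not the A and B above):

```python
def matmul(A, B):
    """Product of matrices given as nested lists; cols(A) must equal rows(B)."""
    assert len(A[0]) == len(B)
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]   # permutation matrix

print(matmul(A, B))   # [[2, 1], [4, 3]]
print(matmul(B, A))   # [[3, 4], [1, 2]]: a different matrix
```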

To use some options for the plot:

 > matrixplot(A&*B,heights=histogram,axes=frame,style=patch, title="Matrix Multiplication Plot");

Where "hieghts = histogram " will tally the data into bars of equal area, and creates for each data set, and they are plotted on the same graph, and makes the plot closest to the viewer.

" style = patch ": for colored surface patch rendering.

The plot command also works for Hilbert and Toeplitz matrices, and indeed for matrices of all kinds (these two matrices are defined in Chapter 3):

 > T:=hilbert(7);
 (31)
 > L:=toeplitz([2,-1,3,-2,1,-4,4]);
 (32)
 > matrixplot(T&*L);

Recall that the product of two matrices is defined only when the number of columns of the first matrix equals the number of rows of the second. To use options:

 > matrixplot(T&*L,heights=histogram,axes=boxed,style=patch, title=" Hilbert and Toeplitz Matrix Multiplication Plot");

1.2.3       The Identity for Matrix Multiplication:

The Identity for matrix multiplication is a matrix of zeros, with ones on the diagonal. In order to make a functional operator which does this, we need an " if " statement. The syntax is as follows:

 > f:=(m,n)->if(m=n) then 1 else 0 fi;
 (33)

" if " statement must end with " fi " the backwards " if " .

To make a 7 by 7 identity matrix:

 > S:=matrix(7,7,f);
 (34)

This identity matrix can be obtained directly like this:

 > I7:=evalm(array(identity,1..7,1..7));
 (35)

In "LinearAlgebra" package, also one can obtain the identity directly as follows:

 > with(LinearAlgebra): id7:=IdentityMatrix(7);
 (36)

Now let's make a 7 by 7 random matrix with integer entries from 1 to 7 (by a functional operator):

 > g:=rand(1..7);
 (37)
 > G:=matrix(7,7,g);
 (38)

Also one can make a random matrix directly:

 > B:=randmatrix(8,8,entries=rand(1..8));
 (39)

Now, let's check:

 > evalm(G&*I7);
 (40)
 > evalm(I7&*G);
 (41)

The result is the same in both orders, so "I7" is the identity matrix for the matrix G. To see the plots of G and the identity S (equivalently I7):

 > matrixplot(G);
 > matrixplot(S);

This plot is for the identity of any 7 by 7 matrix whatever the entries are.
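The same if-test constructor and the identity check translate directly to Python; a brief sketch with a fixed (non-random) 3 by 3 matrix standing in for G:

```python
f = lambda m, n: 1 if m == n else 0          # the identity's entry function
I3 = [[f(m, n) for n in range(3)] for m in range(3)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

G = [[4, 7, 2], [1, 3, 6], [5, 2, 9]]        # stand-in for the random matrix
# Multiplying by the identity on either side returns G unchanged
print(matmul(G, I3) == G, matmul(I3, G) == G)   # True True
```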

1.3  Gauss Reduction and the Gauss-Jordan Method

1.3.1   Gauss Reduction, the fast way:

 > restart;with(linalg):
 Warning, the protected names norm and trace have been redefined and unprotected

First let's define a matrix A whose entries are the coefficients of the variables in a set of linear equations:

 > A:=Matrix(3,3,[[0,-1,2],[2,-3,1],[-4,5,2]]);
 (42)

And define a vector that contains the constant terms in the set of linear equations:

 > b:=vector([-7,7,3]);
 (43)

Now we make the augmented matrix:

 > Ab:=augment(A,b);
 (44)

Where "augment" command joins two or more matrices or vectors together horizontally. Then use "gausselim" command which does Gauss Reduction :

 > gausselim(Ab);
 (45)

The Gauss-Jordan Method, the fast way:

We have the augmented matrix which we made previously:

 > evalm(Ab);
 (46)

To introduce "gaussjord" command which performs the Gauss-Jordan Method:

 > gaussjord(Ab);
 (47)

It gives the result immediately.

Or use " rref " command to see the reduced row echelon form of (Ab) which equivalent to " gaussjord " command:

rref(Ab);

 (48)
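For readers without Maple, the same reduced row-echelon form can be computed with a short pure-Python Gauss-Jordan routine using exact rational arithmetic (a sketch, not Maple's own algorithm), applied to the augmented matrix above:

```python
from fractions import Fraction as F

def rref(M):
    """Reduced row-echelon form of a matrix given as nested lists."""
    M = [[F(x) for x in row] for row in M]
    nrows, ncols = len(M), len(M[0])
    r = 0
    for c in range(ncols):
        pivot = next((i for i in range(r, nrows) if M[i][c] != 0), None)
        if pivot is None:
            continue                      # no pivot in this column
        M[r], M[pivot] = M[pivot], M[r]   # swap the pivot row up
        pv = M[r][c]
        M[r] = [x / pv for x in M[r]]     # scale the pivot to 1
        for i in range(nrows):
            if i != r:
                fac = M[i][c]
                M[i] = [a - fac * b for a, b in zip(M[i], M[r])]
        r += 1
        if r == nrows:
            break
    return M

Ab = [[ 0, -1, 2, -7],
      [ 2, -3, 1,  7],
      [-4,  5, 2,  3]]
R = rref(Ab)
# rows reduce to [1,0,0,44], [0,1,0,31], [0,0,1,12]: the solution is x1=44, x2=31, x3=12
```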

Gauss Reduction, the long way:

Using the same augmented matrix:

 > evalm(Ab);
 (49)

Now we do Gauss Reduction using elementary row operations. First swap rows 1 and 2 using the following command:

 > L:=swaprow(Ab,1,2);
 (50)

Next, add row 1, multiplied by 2, to row 3 and replace row 3 with the result as follows:

 > L:=addrow(L,1,3,2);
 (51)

Then, add row 2, multiplied by -1 to row 3 and replace row 3 with the result:

 > L:=addrow(L,2,3,-1);
 (52)

This is the row-echelon form of the augmented matrix. Let's see if it agrees with what "gausselim" gives us:

 > evalm(L-gausselim(Ab));
 (53)

We have agreement!!!
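The elementary row operations used above (with Maple's swaprow/addrow calling convention) are easy to mimic in Python; a sketch reproducing the same reduction, with 0-based row indices:

```python
def swaprow(M, i, j):
    """Return a copy of M with rows i and j swapped (0-based)."""
    M = [row[:] for row in M]
    M[i], M[j] = M[j], M[i]
    return M

def addrow(M, src, dst, m):
    """Return a copy of M with m*row[src] added to row[dst]."""
    M = [row[:] for row in M]
    M[dst] = [a + m * b for a, b in zip(M[dst], M[src])]
    return M

Ab = [[0, -1, 2, -7], [2, -3, 1, 7], [-4, 5, 2, 3]]
L = swaprow(Ab, 0, 1)        # swap rows 1 and 2
L = addrow(L, 0, 2, 2)       # row 3 := row 3 + 2*row 1
L = addrow(L, 1, 2, -1)      # row 3 := row 3 - row 2
print(L)   # [[2, -3, 1, 7], [0, -1, 2, -7], [0, 0, 2, 24]]
```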

However, we were just lucky: the result of Gauss reduction is non-unique. Another choice of row operations would lead to a different form of the augmented matrix. See the following:

Gauss Reduction is Non-Unique:

Let's perform Gauss reduction the long way on our previous augmented matrix, but do things a little differently, thereby ending up with a different row-echelon form. Recall the augmented matrix:

 > evalm(Ab);
 (54)

Instead of swapping rows 1 and 2, swap rows 1 and 3:

 > L1:=swaprow(Ab,1,3);
 (55)

Next: add row 1, multiplied by 1/2, to row 2, and replace row 2 with the result:

 > L1:=addrow(L1,1,2,1/2);
 (56)

Multiply row 2 by -2, to get rid of the fraction, using the "mulrow" command:

 > L1:=mulrow(L1,2,-2);
 (57)

Finally, add row 2 to row 3, and replace  row 3 with the result:

 > L1:=addrow(L1,2,3,1);
 (58)

So, matrix  "L1" is in Row-Echelon form, but it is not the same as which obtained before (L). We can see that:

 > evalm(L-L1);
 (59)

They are definitely not the same; that is, Gauss reduction is non-unique.

The Gauss-Jordan Method, the long way:

Let's consider a 3 by 4 matrix, A, which represents the augmented matrix for a system of 3 equations in 3 variables, and follow the procedure of row operations to obtain the reduced row-echelon form of this augmented matrix:

 > restart;with(linalg):
 Warning, the protected names norm and trace have been redefined and unprotected
 > A:=<<1,2,3>|<1,4,6>|<2,-3,-5>|<9,1,0>>;
 (60)
 > A:=addrow(A,1,2,-2);
 (61)
 > A:=addrow(A,1,3,-3);
 (62)
 > A:=mulrow(A,2,1/2);
 (63)
 > A:=addrow(A,2,3,-3);
 (64)
 > A:=mulrow(A,3,-2);
 (65)
 > A:=addrow(A,3,2,7/2);
 (66)
 > A:=addrow(A,3,1,-2);
 (67)
 > A:=addrow(A,2,1,-1);
 (68)

Now, this is the Reduced Row-Echelon form of the augmented matrix. To see if it agrees with "gaussjord" or not:

 > evalm(A-gaussjord(A));
 (69)

We have agreement. While Gauss reduction is non-unique, the full Gauss-Jordan method does lead to a unique result, the reduced row-echelon form (one can check this using row operations similar to those in the Gauss reduction case).

1.3.2   How to Tell if a System is Inconsistent:

A system of equations that has no solution is said to be inconsistent.

Now let C be a 4 by 5 matrix which represents the augmented matrix for a system of 4 equations in 4 variables:

 > C:=matrix(4,5,[4,2,-5,-5,-3,4,-3,-2,-5,2,-5,4,0,4,5,-8,6,4,10,0]);
 (70)

To obtain a Row-Echelon form:

 > gausselim(C);
 (71)

So, the system of equations is inconsistent, because the last row says "0 = 4", which isn't true.

For a similar case that is consistent, let M be a 4 by 5 matrix which represents the augmented matrix for a system of 4 equations in 4 variables:

 > M:=matrix(4,5,[4,2,-5,-5,-3,4,-3,-2,-5,2,-5,4,0,4,5,-8,6,4,0,-4]);
 (72)

To find a Row-Echelon form:

 > gausselim(M);
 (73)

Here the system is consistent. Don't get confused! The last row is not saying "-10 = 0"; it is saying -10*x_4 = 0 (i.e., -10 multiplied by x_4 equals 0). That means x_4 = 0, so the system can be solved completely. Using the Gauss-Jordan method:

 > gaussjord(M);
 (74)

1.3.3    How to Tell if a System Has Free Variables:

A System with One Free Variable:

Now, consider a 4 by 5 matrix , A, which represents the augmented matrix for a system of 4 equations in 4 variables:

 > restart; with(linalg):  with(plots):
 Warning, the protected names norm and trace have been redefined and unprotected Warning, the name changecoords has been redefined
 > A:=matrix(4,5,[4,2,-5,-5,-3,4,-3,-2,-5,2,-5,4,0,4,5,8,-1,-7,-10,-1]);
 (75)

The Row-Echelon form of this matrix A:

 > gausselim(A);
 (76)

There is no pivot in column 3, which indicates that the 3rd variable x_3 is free. If we use the Gauss-Jordan method, we find the other three variables in terms of the free variable:

 > gaussjord(A);
 (77)

A System with Two Free Variables:

Consider a 4 by 5 matrix, B, which represents the augmented matrix for a system of 4 equations in 4 variables:

 > B:=matrix(4,5,[4,2,-5,-5,-3,-1,6,-5,-1,2,-5,4,0,4,5,9,-2,-5,-9, -8]);
 (78)

The row-echelon form of the matrix B:

 > gausselim(B);
 (79)

There are no pivots in columns 3 and 4, which indicates that the 3rd and 4th variables (x_3, x_4) are free. If we use the Gauss-Jordan method, we'll find the other two variables in terms of the two free variables:

 > gaussjord(B);
 (80)

1.3.4    Using Elementary Matrices In Computations:

The purpose of this section is to show how to use elementary matrices to perform elementary row operations on matrices in computations. This will be illustrated by using elementary matrices to perform Gauss reduction on an augmented matrix.

A Sample System to Work with:

Use the following system (our matrix A and vector b used before in Gauss reduction):

 > restart; with(linalg):
 Warning, the protected names norm and trace have been redefined and unprotected
 > A:=matrix(3,3,[0,-1,2,2,-3,1,-4,5,2]);
 (81)
 > b:=vector([-7,7,3]);
 (82)
 > Ab:=augment(A,b);
 (83)

That is our system to be used.

Gauss Reduction, Using Elementary Matrices:

First we build L, a 3 by 3 identity matrix, from which the elementary matrices will be obtained:

 > f:=(i,j)->if (i=j)then 1 else 0 fi;
 (84)
 > L:=matrix(3,3,f);
 (85)
 > N1:=swaprow(L,1,2);
 (86)
 > K:=evalm(N1&*Ab);
 (87)

Then build N2 (multiply row 1 of L by 2 and add it to row 3):

 > N2:=addrow(L,1,3,2);
 (88)
 > K:=evalm(N2&*K);
 (89)

Finally, build N3 (multiply row 2 of L by -1 and add it to row 3):

 > N3:=addrow(L,2,3,-1);
 (90)
 > K:=evalm(N3&*K);
 (91)

Gauss Reduction, Using Elementary Row Operations:

Now use elementary row operations directly to reduce the matrix "Ab" to row-echelon form.

 > evalm(Ab);
 (92)
 > M:=swaprow(Ab,1,2);
 (93)

Multiply row 1 by 2 and add the result to row 3:

 > M:=addrow(M,1,3,2);
 (94)

Multiply row 2 by -1 and add the result to row 3:

 > M:=addrow(M,2,3,-1);
 (95)

Now check this against the result obtained previously:

 > evalm(K-M);
 (96)

The two methods agreed.

Using Elementary Matrices to do Gauss Reduction in one Step:

One can ask: what is the use of elementary matrices? They seem a rather inefficient way to do Gauss reduction when we can apply the row operations directly to the augmented matrix. However, elementary matrices are of theoretical interest, and they can be used to do Gauss reduction in one step, as follows:

 > evalm(N3&*N2&*N1&*Ab);
 (97)

This is the same as what the "gausselim" command gives:

 > gausselim(Ab);
 (98)
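The one-step product of elementary matrices is easy to reproduce in Python: build N1, N2, N3 by applying each row operation to the identity, then multiply them through (a sketch with 0-based indices):

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

I = [[1 if i == j else 0 for j in range(3)] for i in range(3)]

N1 = [I[1][:], I[0][:], I[2][:]]   # identity with rows 1 and 2 swapped
N2 = [row[:] for row in I]
N2[2][0] = 2                       # row 3 := row 3 + 2*row 1
N3 = [row[:] for row in I]
N3[2][1] = -1                      # row 3 := row 3 - row 2

Ab = [[0, -1, 2, -7], [2, -3, 1, 7], [-4, 5, 2, 3]]
E = matmul(N3, matmul(N2, matmul(N1, Ab)))
print(E)   # [[2, -3, 1, 7], [0, -1, 2, -7], [0, 0, 2, 24]]
```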

1.3.5        Finding the Inverse of a Matrix:

The purpose of this section is to show how to find the inverse of a matrix, both with the "inverse" command and using the Gauss-Jordan method.

Now make a 6 by 6 matrix to work with:

 > T:=<<2,3,-5,-4,2,-1>|<4,-3,0,1,-2,-1>|<-7,4,-5,0,3,-2>| <-9,0,2,-4,1,3>|<-6,0,-4,1,2,-7>|<8,-1,0,2,-4,-6>>;
 (99)

To find the inverse of  T using "inverse" command:

 > Q:=inverse(T);
 (100)

To make sure that Q is indeed the inverse of  T:

 > evalm(Q&*T);
 (101)
 > evalm(T&*Q);
 (102)

That is okay.

Finding the Inverse Using Gauss-Jordan Method:

First let's build a 6 by 6 identity matrix (matching T in size); call it C:

 > C:=matrix(6,6,(i,j)->if (i=j)then 1 else 0 fi);
 (103)

Now obtain the augmented matrix TC:

 > TC:=augment(T,C);
 (104)

Then use the Gauss-Jordan method; the inverse of T will appear to the right of the partition:

 > TCgj:=gaussjord(TC);
 (105)

To extract the inverse from the above, use "submatrix" command as follows:

 > Tinv:=submatrix(TCgj,1..6,7..12);
 (106)

Tinv denotes the inverse of the matrix T; 1..6 selects all the rows, and 7..12 selects the required columns. One can see that the result equals the inverse Q obtained before.
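The augment-and-reduce recipe for the inverse can be sketched compactly in Python with exact fractions (a tiny 2 by 2 example rather than the 6 by 6 matrix T):

```python
from fractions import Fraction as F

T = [[F(2), F(1)],
     [F(1), F(1)]]
n = len(T)

# Augment T with the identity: [T | I]
M = [T[i][:] + [F(int(i == j)) for j in range(n)] for i in range(n)]

# Gauss-Jordan reduction; the right half becomes the inverse
for c in range(n):
    p = next(i for i in range(c, n) if M[i][c] != 0)
    M[c], M[p] = M[p], M[c]
    pv = M[c][c]
    M[c] = [x / pv for x in M[c]]
    for i in range(n):
        if i != c:
            fac = M[i][c]
            M[i] = [a - fac * b for a, b in zip(M[i], M[c])]

Tinv = [row[n:] for row in M]
# Tinv equals [[1, -1], [-1, 2]], the inverse of T
```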

1.3.6    The Subspaces of a Matrix and Their Bases:

The purpose of this section is to illustrate the matrix concepts of row space, column space, and nullspace.

Now define a random 5 by 5 matrix whose elements are integers between -6 and 6:

 > restart; with(linalg):
 Warning, the protected names norm and trace have been redefined and unprotected
 > M:=matrix(5,5,rand(-6..6));
 (107)

The probability that this matrix is singular is only about 3 in 400. However, we would like to work with a singular matrix; otherwise the dimension of the nullspace will be zero. Therefore, let's make the last two rows depend on the first three as follows:

 > N:=matrix([row(M,1),row(M,2),row(M,3),evalm(3*row(M,1)+row(M,2)+ row(M,3)),evalm(-row(M,1)+3*row(M,2)+row(M,3))]);
 (108)

Now the matrix N will be a singular matrix.

Finding The nullspace of N through row reduction:

What is the nullspace? The nullspace is the set of solutions of the homogeneous equation Nx = 0. To find the solutions, augment the matrix N with the zero vector, and use the Gauss-Jordan method:

 > s:=vector(5,0);
 (109)
 > N0:=augment(N,s);
 (110)
 > gaussjord(N0);
 (111)

One can see that there are two free variables, as expected, because we set up N so that the 4th and 5th rows would depend on the 1st, 2nd, and 3rd rows. Let's call the free variables r and s; then we have: x1 = 13r/11 + 12s/11, x2 = -5r/11 - 19s/11, x3 = -13r/11 - 35s/22, x4 = r and x5 = s. Or, in column vector notation, we have:

 > v1:=matrix(5,1,[13/11,-5/11,-13/11,1,0]); v2:=matrix(5,1,[12/11,-19/11,-35/22,0,1]);
 (112)
 > X:=r*evalm(v1)+s*evalm(v2);
 (113)

These are the vectors that span the nullspace of the matrix N. In other words, they form a basis for the nullspace.

Finding the nullspace of N using the "nullspace" command:

We can directly find the nullspace of N by using "nullspace" command as shown below:

 > nsp:=nullspace(N);
 (114)

How does this compare to what we obtained through row reduction?

 > evalm(v1); evalm(v2);
 (115)

This is not the same basis! However, we can show that the two bases are equivalent if each can be expressed in terms of linear combinations of the other. To do that, make a matrix out of the 4 column vectors and use the Gauss-Jordan method:

 > w1:=nsp[1];
 (116)
 > w2:=nsp[2];
 (117)
 > K:=augment(v1,v2,w1,w2);
 (118)
 > gaussjord(K);
 (119)

We therefore see that w1 = -11v1/5 and w2 = v2 - 19v1/5. Indeed, the bases are equivalent; in other words, they span the same space.

The Column Space Of N:

First: to find the column space of the matrix N through row reduction, let's recall our matrix N:

 > evalm(N);
 (120)

Now we use the "gausselim" command to obtain a row-echelon form:

 > gausselim(N);
 (121)

Here columns 1, 2, and 3 form a basis for the column space of N (the pivots appear only in these columns). To present these vectors:

 > b1:=col(N,1);  b2:=col(N,2); b3:=col(N,3);
 (122)

So, the column space is spanned by b1, b2 and b3.

Second: let's find the column space of N by using the "colspace" command.

The "colspace" ("rowspace") function returns a set of column (row) vectors that form a basis for the vector space spanned by the columns (rows) of a matrix. The vectors are returned in canonical form, with leading entries 1. The column space (row space) of a zero matrix is an empty set. To examine this, do the following:

 > A:=matrix(2,2,[0,0,0,0]);
 (123)
 > colspace(A);
 (124)

It's clear that the column space of matrix A is an empty set, because A is a zero matrix.

Now, to find the column space of matrix N using the "colspace" command, call it 'csp':

 > csp:=colspace(N);
 (125)

This does not seem to be the same basis we obtained before. To compare, and to show that the two bases are equivalent, we express one in terms of linear combinations of the other; that is:

 > csp1:=csp[1];  csp2:=csp[2];  csp3:=csp[3];
 (126)

Now we form the augmented matrix of the two bases:

 > G:=augment(b1,b2,b3,csp1,csp2,csp3);
 (127)

Then use "gaussjord" command

 > gaussjord(G);
 (128)

Here, we got that:

csp1 = -b2/5 - b3/5,           csp2 = -3b1/11 + 2b2/11 + 3b3/11,     csp3 = -2b1/11 + 3b2/55 + 31b3/110.

So, the bases are equivalent and they span the same space.

The Column Space of N by Finding the Row Space of the Transpose of N:

First, define the transpose of a matrix A:

1- If A is an m x n matrix, the result is an n x m matrix, where the [i,j]-th element of the result is equal to the [j,i]-th element of A. The result inherits the indexing function (for example, diagonal or sparse) of A, if it has one.

2- If A is a vector, then it is treated as a column vector, so "transpose(A)" is a row vector.

To do that:

 > with(linalg):
 Warning, the protected names norm and trace have been redefined and unprotected
 > A:=matrix(3,4,[-1,0,2,5,-8,-9,6,4,3,0,-3,-7]);
 (129)
 > At:=transpose(A);
 (130)

Now, to continue finding the column space of N via the row space of its transpose, let's find the transpose of N:

 > Nt:=transpose(N);
 (131)

We know that the row space of Nt is the column space of N; hence, we find the row space of Nt using row reduction:

 > Mt:=gaussjord(Nt);
 (132)

We see that rows 1, 2, and 3 are nonzero, so they form a basis of the row space of Nt, which is the column space of N:

 > b1:=row(Mt,1);  b2:=row(Mt,2);  b3:=row(Mt,3);
 (133)

To compare our result with the column space of N obtained by the "colspace" command:

 > colspace(N);
 (134)

Obviously we have the same result.

The Row Space Of N:

Finding The Row Space of N through Row Reduction:

First, call our old matrix N:

 > with(linalg):
 Warning, the protected names norm and trace have been redefined and unprotected
 > evalm(N);
 (135)

Then do Gauss reduction to row-echelon form:

 > M:=gausselim(N);
 (136)

The three nonzero rows 1, 2, and 3 form a basis for the row space. So the row space is spanned by the following three vectors:

 > b1:=row(M,1);  b2:=row(M,2);  b3:=row(M,3);
 (137)

Finding the row space of N using the "rowspace" command:

 > rsp:=rowspace(N);
 (138)

To compare the two results, one basis can be expressed in terms of linear combinations of the other. To do this, make a matrix out of the vectors written as column vectors, and use the Gauss-Jordan method:

 > rsp1:=rsp[1];  rsp2:=rsp[2];  rsp3:=rsp[3];

 (139)

Then we call the augmented matrix M1 as follows:

 > M1:=augment(b1,b2,b3,rsp1,rsp2,rsp3);
 (140)

Finally we use "gaussjord" command:

 > gaussjord(M1);
 (141)

We obtained that:

rsp1 = -b1/4 - 3b2 + b3/11,         rsp2 = 4b2 - 7b3/55,       rsp3 = b3/110.

This indicates that the bases are equivalent. Can we obtain this basis directly by applying the "gaussjord" command to the matrix itself?

 > gaussjord(N);
 (142)

So the first three row vectors of the reduced row-echelon form give the basis of the row space that we obtained before.

1.3.7  Finding the basis for a set of vectors:

We saw in the previous sections how to find a basis for the nullspace, the column space, and the row space of a matrix. To find a basis for an arbitrary set of vectors, we put the set of vectors into a matrix as column vectors and proceed as we did before for a matrix. To show this, make a set of the following 6 vectors:

 > with(linalg):
 Warning, the protected names norm and trace have been redefined and unprotected
 > v1:=vector([1,-2,0,5,0,-4,3]); v2:=vector([-3,2,0,1,0,-4,5]); v3:=vector([0,-2,0,0,6,-5,4]); v4:=vector([2,-1,-3,0,7,0,1]); v5:=vector([3,-5,4,0,0,-2,0]); v6:=vector([-4,0,-1,0,-2,1,-2]);
 (143)

Now find the basis using the column space of a matrix.

Call the augmented matrix of these vectors:

 > V:=augment(v1,v2,v3,v4,v5,v6);
 (144)

Finally use "gaussjord" command to find the basis:

 > gaussjord(V);
 (145)

Oh! There are pivots in all the columns, so all the vectors together form a basis for the space they span. To check this using the "basis" command:

 > basis({v1,v2,v3,v4,v5,v6});
 (146)

So the two methods agree. One can also find the basis using the row space of the matrix, similarly to what we did before.

1.3.8  The Rank of a Matrix:

Definition: The rank of a matrix is the dimension of its row space, which is also the dimension of its column space. (The rank of a matrix = the number of its columns - the dimension of its nullspace.) Now one can use the "rank" command:

 > rank(V);
 (147)

In this case the nullspace of the matrix V (the augmented matrix of our set of six vectors) has an empty basis, so its dimension is zero (V has a nullity of zero):

 > nullspace(V);
 (148)

Now (remember our old matrix N) we expect that its rank is 3, because it has a nullity of 2:

 > rank(N);
 (149)

We can also make use of the "gausselim" command to find the rank of a matrix and a determinant by adding optional arguments, "gausselim(V,'r','d')", where r is assigned the rank and d the determinant of submatrix(V,1..n,1..n), with n the rank of the matrix V.

 > gausselim(V,'r','d');
 (150)
 > r;
 (151)
 > d;
 (152)

To do that for the matrix N:

 > gausselim(N,'r','d');
 (153)
 > r;
 (154)
 > d;
 (155)
 > det(N);
 (156)

Here we compared the result with the "det" command, which applies because N is a square matrix; for a non-square matrix, d is the determinant of the square submatrix selected by the rank.