ORDINARY DIFFERENTIAL EQUATIONS POWERTOOL

Lesson 11 -- First-Order Linear Systems

Prof. Douglas B. Meade

Industrial Mathematics Institute

Department of Mathematics

University of South Carolina

Columbia, SC 29208

URL:   http://www.math.sc.edu/~meade/

E-mail:  meade@math.sc.edu

Copyright  2001  by Douglas B. Meade

All rights reserved

-------------------------------------------------------------------

>

Outline of Lesson 11

11.A Equilibrium Analysis

                11.A-1 Nullclines

                11.A-2 Equilibrium Solutions

11.B Graphical Analysis

                11.B-1 Direction Fields

                11.B-2 Phase Portraits

                11.B-3 Solution Curves

11.C Analytic Solutions

                11.C-1 One-Step Solutions using dsolve

                11.C-2 Eigenvalue Analysis

                                   Example 1 Real and Distinct Eigenvalues

                                   Example 2 Complex Eigenvalues

                                   Example 3 Repeated Eigenvalues

11.D Numerical Solutions

>

Initialization

> restart;

> with( DEtools ):

> with( plots ):

> with( LinearAlgebra ):

> with( student ):

Warning, the name changecoords has been redefined

>

11.A Equilibrium Analysis

11.A-1 Nullclines

In the xy-plane, an isocline for the differential equation dy/dx = f(x, y) is a curve along which dy/dx has a constant value.  Thus, an isocline is a level curve of the function f(x, y).

If the constant value of f(x, y) on the isocline is zero, then the isocline is called a nullcline.

A nullcline for a two-dimensional first-order system of differential equations is a curve along which at least one of the dependent variables (x or y) does not change, i.e., where dx/dt = 0 or dy/dt = 0.

The nullclines for the system

dx/dt = F(x, y)

dy/dt = G(x, y)

are found by graphing the curves y(x) defined implicitly by the equations F(x, y) = 0 and G(x, y) = 0 .  While it can be difficult to give a general description of the nullclines for a general nonlinear system, the nullclines for a linear system are always straight lines that intersect at an equilibrium solution.

Consider the general first-order linear system with constant coefficients, namely,

dx/dt = a*x(t)+b*y(t)+f

dy/dt = c*x(t)+d*y(t)+g

which are rendered in Maple via

> ode1 := diff( x(t), t ) = a*x(t) + b*y(t) + f:

> ode2 := diff( y(t), t ) = c*x(t) + d*y(t) + g:

> sys := { ode1, ode2 };

sys := {diff(x(t), t) = a*x(t)+b*y(t)+f, diff(y(t), t) = c*x(t)+d*y(t)+g}

>

The nullclines of the system are the straight lines whose equations are

> nullcline := eval( sys, { diff(x(t),t)=0, diff(y(t),t)=0, x(t)=x, y(t)=y } );

nullcline := {0 = a*x+b*y+f, 0 = c*x+d*y+g}

>

The change from x(t) to x and from y(t) to y is made to simplify the graphical display of the nullclines.

For a concrete example, select the following values of the coefficients.

> value_of_constants := {a=1,b=-2,c=3,d=1,f=2,g=1};

value_of_constants := {a = 1, b = -2, d = 1, c = 3, f = 2, g = 1}

>

Then the system is

> sys1 := eval( sys, value_of_constants );

sys1 := {diff(x(t), t) = x(t)-2*y(t)+2, diff(y(t), t) = 3*x(t)+y(t)+1}

>

and the nullclines are

> nullcline1 := eval( sys1, { diff(x(t),t)=0, diff(y(t),t)=0, x(t)=x, y(t)=y } );

nullcline1 := {0 = x-2*y+2, 0 = 3*x+y+1}

>

or, expressing each line in slope-intercept form,

> map( isolate, nullcline1, y );

{y = 1+1/2*x, y = -1-3*x}

>

These nullclines are graphed in Figure 11.1, created from an implicitplot of each nullcline. (The implicitplot command is needed to handle cases where the nullcline is a vertical line.)

> WINDOW := x=-5..5, y=-5..5:

> plot_nullcline := display( map( implicitplot, nullcline1, WINDOW, color=GREEN ) ):

> display(plot_nullcline, title="Figure 11.1");

[Plot]

>

11.A-2 Equilibrium Solutions

Equilibrium solutions (if they exist) occur at the intersection of nullclines corresponding to each of the differential equations in the system. For a general nonlinear system, special care must be taken to ensure that an intersection point of nullclines is actually an equilibrium solution. Fortunately, for a two-dimensional linear system, any intersection of the nullclines is automatically an equilibrium solution.

In terms of the example introduced above, namely,

> sys1;

{diff(x(t), t) = x(t)-2*y(t)+2, diff(y(t), t) = 3*x(t)+y(t)+1}

>

with nullclines

> nullcline1;

{0 = x-2*y+2, 0 = 3*x+y+1}

>

the intersection of the nullclines can be approximated from Figure 11.1, the plot of the nullclines. The exact solution is found to be

> equil_soln := solve( nullcline1, {x,y} );

equil_soln := {x = (-4)/7, y = 5/7}

>

or, expressed as the coordinates of a point,

> equil_point := eval( [x,y], equil_soln );

equil_point := [(-4)/7, 5/7]

>
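As a quick check (a sketch), substituting the coordinates of equil_point into the right-hand sides of the system should return zero for both derivatives:

> # Sketch: both right-hand sides of sys1 vanish at the equilibrium point.
> eval( map( rhs, convert(sys1, list) ), {x(t) = equil_point[1], y(t) = equil_point[2]} );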

Graphically, the equilibrium solution is the intersection of the two nullclines, as shown in Figure 11.2.

> plot_equil := pointplot( equil_point, symbol=CIRCLE, symbolsize=18, color=BLUE ):

> display([plot_nullcline,plot_equil], title="Figure 11.2");

[Plot]

>

11.B Graphical Analysis

11.B-1 Direction Fields

The direction field (see Lesson 1) can be a source of much information about a system of ODEs, even when an explicit formula for the solution is not known. For starters, the direction field for a system shows the direction in which the solution moves at any point in space.

Again consider the system

> sys1;

{diff(x(t), t) = x(t)-2*y(t)+2, diff(y(t), t) = 3*x(t)+y(t)+1}

>

first seen in Lesson 11, Section A .  The aim here is to construct and study the direction field for this system.

Even though this is an autonomous system (recall from Lesson 1 that the independent variable does not explicitly appear), Maple's DEplot command requires an interval for the independent variable (t).  The specific interval chosen is irrelevant, provided the endpoints are numeric.  Hence, pick the domain

> DOMAIN := t = 0 .. 1:

>

The direction field for this system appears in Figure 11.3.

> plot_direction_field := DEplot( sys1, [x(t),y(t)], DOMAIN, WINDOW, arrows = MEDIUM ):

> display(plot_direction_field, title="Figure 11.3");

[Plot]

>

From this picture it is evident that solutions spiral outward from a point in the second quadrant. To see that the center of these spirals is the equilibrium solution, superimpose the plot of the equilibrium solution and the nullclines, as done in Figure 11.4.

> display( [plot_direction_field, plot_equil, plot_nullcline], title="Figure 11.4" );

[Plot]

>

The nullclines show exactly where the individual components pass through critical points and, hence, where they transition between increasing and decreasing functions (of t ). In practice, if a computer is not available to create the direction field, the constancy of the sign of the derivative of each component in each region created by the nullclines provides insight into the qualitative behaviour of solutions to a linear, autonomous system of ODEs.
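For instance, here is a minimal sketch of such a sign check for sys1.  The test point [3, 0] is an arbitrary choice lying to the right of both nullclines, and the name test_signs is introduced just for this check.

> # Sketch: signs of dx/dt and dy/dt at a sample point in one nullcline region.
> # Both signs are positive, so x(t) and y(t) are both increasing throughout that region.
> test_signs := map( signum, eval( map( rhs, convert(sys1, list) ), {x(t) = 3, y(t) = 0} ) );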

>

11.B-2 Phase Portraits

A phase portrait for an autonomous system of ODEs displays solutions to the system in "phase space" where solutions x(t) and y(t) are treated as parametric equations for the curve y(x) . (See Lesson 1.)  

To create a phase portrait in Maple, the DEplot command is recommended. (The phaseportrait command, also from the DEtools package, is essentially the same, but there is no reason to learn and remember a new command when simple modifications to DEplot suffice.)

One solution curve is generated for each initial condition. For a two-dimensional system in x(t) and y(t) , each initial condition can be specified either as triples of numbers [t[0], x(t[0]), y(t[0])] or as pairs of equations [x(t[0]) = x[0], y(t[0]) = y[0]] . For example, initial conditions at the points with integer coordinates along the axes can be specified as

> ICy := [0,0,i] $ i=-5..5;

ICy := [0, 0, -5], [0, 0, -4], [0, 0, -3], [0, 0, -2], [0, 0, -1], [0, 0, 0], [0, 0, 1], [0, 0, 2], [0, 0, 3], [0, 0, 4], [0, 0, 5]

>

for the y -axis,

> ICx := [x(0)=i,y(0)=0] $ i=-5..5;

ICx := [x(0) = -5, y(0) = 0], [x(0) = -4, y(0) = 0], [x(0) = -3, y(0) = 0], [x(0) = -2, y(0) = 0], [x(0) = -1, y(0) = 0], [x(0) = 0, y(0) = 0], [x(0) = 1, y(0) = 0], [x(0) = 2, y(0) = 0], [x(0) = 3, y...

>

for the x -axis, and

> IC := ICx, ICy:

>

as a composite list for both axes.

As in the previous uses of DEplot, an interval of values for the independent variable is required. Unlike the use of DEplot for the creation of direction fields, this argument does have a meaning for phase portraits. The time interval provides limits for the numerical methods used to obtain approximate solutions for each initial condition.

Using the domain

> DOMAIN;

t = 0 .. 1

>

for the independent variable, the phase portrait for this system and these initial conditions appears in Figure 11.5.

> DEplot( sys1, [x(t),y(t)], DOMAIN, [ IC ],

>        arrows=NONE, scene=[ x, y ], linecolor=BLUE, title="Figure 11.5" );

[Plot]

>

To restrict the solutions to a pre-determined "window", include a viewing window such as

> WINDOW;

x = -5 .. 5, y = -5 .. 5

>

This gives Figure 11.6.

> plot_phase_portrait := DEplot( sys1, [x(t),y(t)], DOMAIN, [ IC ], WINDOW,

>                               arrows=NONE, scene=[ x, y ], linecolor=BLUE ):

> display(plot_phase_portrait, title="Figure 11.6");

[Plot]

>

A simple change to the arrows= option includes the direction field in the phase portrait, as seen in Figure 11.7.

> DEplot( sys1, [x(t),y(t)], DOMAIN, [ IC ], WINDOW,

>        arrows=SMALL, scene=[ x, y ], linecolor=BLUE, title="Figure 11.7" );

[Plot]

>

In Figure 11.8, this plot is superimposed on the direction field, nullclines, and equilibrium solution.

> display( [plot_direction_field, plot_nullcline,

>          plot_equil, plot_phase_portrait], title="Figure 11.8" );

[Plot]

>

Even though the independent variable (t) is not explicitly displayed in a phase portrait, it is implicitly present in that the solution curves are traversed at different speeds. To show the coordination between solutions from different initial conditions, specify the linecolor= option with a function or expression that depends on the independent variable. For example, this is done in Figure 11.9.

> DEplot( sys1, [x(t),y(t)], DOMAIN, [ IC ],

>        arrows=NONE, scene=[ x, y ], linecolor=t, title="Figure 11.9" );

[Plot]

Note that it is not necessary to be too concerned about the specific range of values of the linecolor= option; DEplot automatically normalizes these values to the interval [0,1].

Figure 11.9 can also be rendered as an animation that traces out the trajectories in the phase portrait.

> WINDOW2 := x=-20..20, y=-20..20:

> phase_portrait := T -> DEplot( sys1, [x(t),y(t)], t=0..T, [ IC ],

>                               WINDOW2, arrows=SLIM, scene=[ x, y ],

>                               linecolor=BLUE, stepsize=0.1 ):

> display( [ phase_portrait(0.1), seq( phase_portrait(i/4), i=1..10) ], insequence=true, title="Figure 11.9" );

[Plot]

>

Note that to provide a consistent resolution for all solution curves in the animation, the stepsize= option has been used.

>

11.B-3 Solution Curves

For the two-dimensional linear system, a solution curve is a plot of one component of the solution as a function of the independent variable and is produced using the DEplot command. The only changes to the arguments are removing the specification of the display window size and the arrows= argument, and changing the scene= argument to specify the component to be plotted.

For example, when the initial condition is

> IC := [x(0)=0,y(0)=1];

IC := [x(0) = 0, y(0) = 1]

>

Figure 11.10, containing graphs of the x - and y -components of the particular solution satisfying this initial condition, is created and displayed with

> plot_soln_x := DEplot( sys1, [x(t),y(t)], DOMAIN, [ IC ],

>                       scene=[ t, x ], linecolor=black ):

> plot_soln_y := DEplot( sys1, [x(t),y(t)], DOMAIN, [ IC ],

>                       scene=[ t, y ], linecolor=red ):

> display( [ plot_soln_x, plot_soln_y ], labels=[`t`,``], title="Figure 11.10" );

[Plot]

>

If multiple initial conditions are specified as in

> IC2 := [x(0)=0,y(0)=1], [x(0)=0,y(0)=-1];

IC2 := [x(0) = 0, y(0) = 1], [x(0) = 0, y(0) = -1]

>

one solution curve is produced for each initial condition.  The individual plots are created with

> plot_soln_x2 := DEplot( sys1, [x(t),y(t)], DOMAIN, [ IC2 ],

>                        scene=[ t, x ], linecolor=[BLUE,GREEN] ):

> plot_soln_y2 := DEplot( sys1, [x(t),y(t)], DOMAIN, [ IC2 ],

>                        scene=[ t, y ], linecolor=[BLUE,GREEN] ):

>

and can be displayed together, as in Figure 11.11.

> display( [ plot_soln_x2, plot_soln_y2 ] , labels=[`t`,``], title="Figure 11.11" );

[Plot]

>

To eliminate some of the clutter from the multiple plots, it is sometimes advantageous to display each pair of solutions in side-by-side plots, as seen in the following figure.

> display( array([plot_soln_x2,plot_soln_y2]), scaling=constrained );

[Plot]

>

Note how the colors are used to indicate pairs of solutions corresponding to the same initial condition.  The blue curve on the left is the graph of x(t), whereas the blue curve on the right is the graph of y(t); both correspond to the first initial condition.  The pair of green curves corresponds to the second initial condition.

>

11.C Analytic Solutions

11.C-1 One-Step Solutions using dsolve

For a first-order linear system of ODEs, Maple's dsolve command should be able to find the general solution to the system and the particular solution for any initial condition.

For example, for the system of ODEs

> sys1;

{diff(x(t), t) = x(t)-2*y(t)+2, diff(y(t), t) = 3*x(t)+y(t)+1}

>

the general solution is

> gen_soln := dsolve( sys1, {x(t),y(t)} );

gen_soln := {x(t) = -4/7+exp(t)*(sin(6^(1/2)*t)*_C2+cos(6^(1/2)*t)*_C1), y(t) = 5/7+1/2*exp(t)*6^(1/2)*(-_C2*cos(6^(1/2)*t)+_C1*sin(6^(1/2)*t))}

>

This result is more easily grasped when written as

> xt = eval(x(t), gen_soln);
yt = eval(y(t), gen_soln);

xt = -4/7+exp(t)*(sin(6^(1/2)*t)*_C2+cos(6^(1/2)*t)*_C1)

yt = 5/7+1/2*exp(t)*6^(1/2)*(-_C2*cos(6^(1/2)*t)+_C1*sin(6^(1/2)*t))

>

The solution satisfying an initial condition, say

> IC;

[x(0) = 0, y(0) = 1]

>

can be found by substituting the initial condition into the general solution, thereby producing the equations

> q1 := eval( eval( gen_soln, t=0 ), IC );

q1 := {1 = 5/7-1/2*6^(1/2)*_C2, 0 = -4/7+_C1}

>

which can then be solved for the constants of integration, namely,

> q2 := solve( q1, {_C1,_C2} );

q2 := {_C1 = 4/7, _C2 = -2/21*6^(1/2)}

>

The corresponding solution of this system of differential equations is

> part_soln := eval( gen_soln, q2 );

part_soln := {y(t) = 5/7+1/2*exp(t)*6^(1/2)*(2/21*6^(1/2)*cos(6^(1/2)*t)+4/7*sin(6^(1/2)*t)), x(t) = -4/7+exp(t)*(-2/21*sin(6^(1/2)*t)*6^(1/2)+4/7*cos(6^(1/2)*t))}

>

Again, this is more easily grasped when written as

> xt = eval(x(t), part_soln);
yt = eval(y(t), part_soln);

xt = -4/7+exp(t)*(-2/21*sin(6^(1/2)*t)*6^(1/2)+4/7*cos(6^(1/2)*t))

yt = 5/7+1/2*exp(t)*6^(1/2)*(2/21*6^(1/2)*cos(6^(1/2)*t)+4/7*sin(6^(1/2)*t))

>

Alternatively, the solution to the initial value problem

> IVP := sys1 union convert(IC,set);

IVP := {x(0) = 0, diff(x(t), t) = x(t)-2*y(t)+2, diff(y(t), t) = 3*x(t)+y(t)+1, y(0) = 1}

>

could be found with the single command

> ivp_soln := dsolve( IVP, {x(t),y(t)} );

ivp_soln := {y(t) = 5/7+1/2*exp(t)*6^(1/2)*(2/21*6^(1/2)*cos(6^(1/2)*t)+4/7*sin(6^(1/2)*t)), x(t) = -4/7+exp(t)*(-2/21*sin(6^(1/2)*t)*6^(1/2)+4/7*cos(6^(1/2)*t))}

>

While the odetest command is unable to check solutions for a system of ODEs, it is not difficult to verify that the two solutions presented above are identical.  The equivalence of x(t) is established with

> q3 := eval( x(t), part_soln ) = eval( x(t), ivp_soln );

> evalb(q3);

q3 := -4/7+exp(t)*(-2/21*sin(6^(1/2)*t)*6^(1/2)+4/7*cos(6^(1/2)*t)) = -4/7+exp(t)*(-2/21*sin(6^(1/2)*t)*6^(1/2)+4/7*cos(6^(1/2)*t))

true

>

whereas that of y(t) is established with

> q4 := eval( y(t), part_soln ) = eval( y(t), ivp_soln );

> evalb(q4);

q4 := 5/7+1/2*exp(t)*6^(1/2)*(2/21*6^(1/2)*cos(6^(1/2)*t)+4/7*sin(6^(1/2)*t)) = 5/7+1/2*exp(t)*6^(1/2)*(2/21*6^(1/2)*cos(6^(1/2)*t)+4/7*sin(6^(1/2)*t))

true

>

That these solutions satisfy the system of differential equations is established with

> q5 := simplify( eval( convert(sys1,list), ivp_soln ) );

> map( evalb, q5 );

q5 := [-2/3*exp(t)*sin(6^(1/2)*t)*6^(1/2) = -2/3*exp(t)*sin(6^(1/2)*t)*6^(1/2), 2*exp(t)*cos(6^(1/2)*t) = 2*exp(t)*cos(6^(1/2)*t)]

[true, true]

>

and that they satisfy the initial condition is established with

> q6 := eval( ivp_soln, t=0 );

> map( evalb, eval( IC, q6 ) );

q6 := {x(0) = 0, y(0) = 1}

[true, true]

>

11.C-2 Eigenvalue Analysis

If the constant-coefficient, first-order linear system of ODEs is written in the matrix form

X' = A X + b

where X = X(t) is the vector of unknown functions, X' = d/dt X is the vector of derivatives of the unknown functions, A is the coefficient matrix, and b is the non-homogeneous ("forcing") vector, the solution can be constructed from the eigenvalues and eigenvectors (eigenpairs) of the matrix A .  For example, the matrix form of the system

x'(t) = 2*x + 3*y

y'(t) = 5*x - 7*y

would be

X' = < x', y' > = Matrix([[2, 3], [5, -7]]) . < x, y >

An eigenpair for a matrix A consists of a scalar lambda and a direction (expressed by an eigenvector V) satisfying the defining equation

A V = lambda V

Thus, the eigenvector represents a direction that is invariant under multiplication by the matrix A.  A vector in this direction can, at most, have its length scaled (and its sense reversed) by the factor lambda, which is called the eigenvalue.

If this defining equation is rewritten in the form

(A - lambda*I) V = 0

where I is the 2 x 2 identity matrix Matrix([[1, 0], [0, 1]]), then the eigenvalues are found as the roots of the characteristic equation, that is, the equation det(A - lambda*I) = 0.
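For instance, a brief sketch of this computation for the 2 x 2 matrix of the sample system above (the names A0 and char_eq are introduced here only for this check):

> # Sketch: build the characteristic equation det(A0 - lambda*I) = 0 and solve for lambda.
> A0 := Matrix( [[2, 3], [5, -7]] ):
> char_eq := Determinant( A0 - lambda*IdentityMatrix(2) ) = 0;
> solve( char_eq, lambda );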

If the eigenpairs are (lambda[1], V1) and (lambda[2], V2), then the general solution of the corresponding homogeneous system X' = A X is given by the sum

X[g] = c[1]*exp(lambda[1]*t) V1 + c[2]*exp(lambda[2]*t) V2

where c[1] and c[2] are arbitrary constants.  (For a non-homogeneous system, a particular solution must be added to this sum, as in Example 2 below.)

The examples below detail how the solution of the linear system can be constructed from the eigenpairs of the system matrix A .

>

Example 1: Real and Distinct Eigenvalues

Consider the system of ODEs

> sys2 := [ diff( x(t), t ) = x(t) + y(t), diff( y(t), t ) = x(t) + y(t) ];

sys2 := [diff(x(t), t) = x(t)+y(t), diff(y(t), t) = x(t)+y(t)]

>

Let the vector of unknown functions be

> X := < x(t), y(t) >;

X := Vector[column]([[x(t)], [y(t)]])

>

The coefficient matrix A and the forcing vector b can be extracted via the GenerateMatrix command from the LinearAlgebra package:

> (Aneg,b) := GenerateMatrix( eval(sys2,[x(t)=_x,y(t)=_y]), [_x,_y] ):

> A := -Aneg;

A := Matrix([[1, 1], [1, 1]])

>

Thus, the system can be expressed in vector form as

> map(diff, X, t ) = A.X + b;

Vector[column]([[diff(x(t), t)], [diff(y(t), t)]]) = Vector[column]([[x(t)+y(t)], [x(t)+y(t)]])

>

The general solution to this system of ODEs is

> gen_soln := dsolve( sys2, {x(t),y(t)} );

gen_soln := {y(t) = _C2*exp(2*t)-_C1, x(t) = _C1+_C2*exp(2*t)}

>

which, when written in vector form

> X_soln := eval( X, gen_soln );

X_soln := Vector[column]([[_C1+_C2*exp(2*t)], [_C2*exp(2*t)-_C1]])

>

and separated into terms involving at most one of the two constants of integration, can be written as

> X1 := map( coeff, X_soln, _C1 ):

> X2 := map( coeff, X_soln, _C2 ):

> Xp := eval( X_soln, [_C1=0,_C2=0] ):

> X = _C1 * X1 + _C2 * X2 + Xp;

Vector[column]([[x(t)], [y(t)]]) = Vector[column]([[_C1+_C2*exp(2*t)], [_C2*exp(2*t)-_C1]])

>

The two vectors

> V1 := X1;
V2 := eval(X2, t=0);

V1 := Vector[column]([[1], [-1]])

V2 := Vector[column]([[1], [1]])

>

are the eigenvectors corresponding, respectively, to the eigenvalues

> lambda[1] = 0;
lambda[2] = 2;

lambda[1] = 0

lambda[2] = 2

>

The eigenvalues are the roots of the characteristic equation

> CharacteristicPolynomial(A,lambda) = 0;

lambda^2-2*lambda = 0

>

The CharacteristicPolynomial command is Maple's built-in tool for computing det(A - lambda*I), so setting this polynomial to zero is equivalent to the equation det(A - lambda*I) = 0.

That each eigenpair satisfies an equation of the form

A V = lambda V

is established with the following computations:

A V1 = Matrix([[1, 1], [1, 1]]) . < 1, -1 > = < 0, 0 >  and  lambda[1] V1 = 0 * < 1, -1 > = < 0, 0 >

A V2 = Matrix([[1, 1], [1, 1]]) . < 1, 1 > = < 2, 2 >  and  lambda[2] V2 = 2 * < 1, 1 > = < 2, 2 >
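These identities can also be checked directly in Maple; a quick sketch using the matrix A and the vectors V1 and V2 defined above is

> # Sketch: both differences should be the zero vector.
> A . V1 - 0 * V1;
> A . V2 - 2 * V2;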

The eigenpairs of a 2 x 2 matrix can generally be found by manual computation.  However, the calculations can become tedious, and are often prone to arithmetic errors.  Instead, we will rely on Maple's Eigenvectors command, the output of which must be carefully deconstructed to extract the eigenvalues and eigenvectors.

Application of that command here yields

> L,V := Eigenvectors(A);

L, V := Vector[column]([[0], [2]]), Matrix([[-1, 1], [1, 1]])

>

Observe that this assignment creates both a vector containing the eigenvalues and the corresponding matrix of eigenvectors.  It is not difficult to extract the eigenvalues (lambda[i] ) and corresponding eigenvectors (E[i] ).  These eigenpairs, as well as the eigensolutions exp(lambda[i]*t) E[i] , are obtained and printed via

> for i from 1 to Dimension(L) do

>  e_val[i] := L[i];

>  e_vec[i] := Column(V,i);

>  e_sol[i] := exp( e_val[i]*t ) * e_vec[i];

>  print( lambda[i]=e_val[i], E[i]=e_vec[i], XX[i] = e_sol[i] );

> end do:

lambda[1] = 0, E[1] = Vector[column]([[-1], [1]]), XX[1] = Vector[column]([[-1], [1]])

lambda[2] = 2, E[2] = Vector[column]([[1], [1]]), XX[2] = Vector[column]([[exp(2*t)], [exp(2*t)]])

>

Note that the vectors XX[1] and XX[2] are the basis vectors for the linear combination that is the general solution of this homogeneous system.

>

Example 2: Complex Eigenvalues

Returning to the system introduced at the beginning of this lesson, namely,

> EQ1 := select(has,sys1,diff(x(t),t))[1]:
EQ2 := select(has,sys1,diff(y(t),t))[1]:

Sys1 := [EQ1,EQ2];

Sys1 := [diff(x(t), t) = x(t)-2*y(t)+2, diff(y(t), t) = 3*x(t)+y(t)+1]

>

recall that this system is non-homogeneous. In fact, the coefficient matrix and forcing term are

> (Aneg,b) := GenerateMatrix( eval(Sys1,[x(t)=_x,y(t)=_y]), [_x,_y] ):

> A := -Aneg;

A := Matrix([[1, -2], [3, 1]])

>

As seen previously, the general solution of this system is

> gen_soln := dsolve( sys1, {x(t),y(t)} );

gen_soln := {x(t) = -4/7+exp(t)*(sin(6^(1/2)*t)*_C2+cos(6^(1/2)*t)*_C1), y(t) = 5/7+1/2*exp(t)*6^(1/2)*(-_C2*cos(6^(1/2)*t)+_C1*sin(6^(1/2)*t))}

>

which, when written in vector form and factored, appears as

> X_soln := eval( X, gen_soln );

X_soln := Vector[column]([[-4/7+exp(t)*(sin(6^(1/2)*t)*_C2+cos(6^(1/2)*t)*_C1)], [5/7+1/2*exp(t)*6^(1/2)*(-_C2*cos(6^(1/2)*t)+_C1*sin(6^(1/2)*t))]])

>

Since this system is non-homogeneous, the particular solution is non-trivial. In this case, one particular solution is

> Xp := eval( X_soln, [_C1=0,_C2=0] );

Xp := Vector[column]([[(-4)/7], [5/7]])

>

and a basis for the solution of the corresponding homogeneous system is

> X1 := map( coeff, X_soln, _C1 );

> X2 := map( coeff, X_soln, _C2 );

X1 := Vector[column]([[exp(t)*cos(6^(1/2)*t)], [1/2*exp(t)*sin(6^(1/2)*t)*6^(1/2)]])

X2 := Vector[column]([[exp(t)*sin(6^(1/2)*t)], [-1/2*exp(t)*6^(1/2)*cos(6^(1/2)*t)]])

>

This leads to the following vector form for the general solution of this system

> evalm(X) = _C1 * evalm(X1) + _C2 * evalm(X2) + evalm(Xp);

vector([x(t), y(t)]) = _C1*vector([exp(t)*cos(6^(1/2)*t), 1/2*exp(t)*sin(6^(1/2)*t)*6^(1/2)])+_C2*vector([exp(t)*sin(6^(1/2)*t), -1/2*exp(t)*6^(1/2)*cos(6^(1/2)*t)])+vector([(-4)/7, 5/7])

>

To understand the relationship between the general solution and the eigenvalue decomposition of the coefficient matrix, consider

> L,V := Eigenvectors( A );

L, V := Vector[column]([[1+I*6^(1/2)], [1-I*6^(1/2)]]), Matrix([[1/3*I*6^(1/2), -1/3*I*6^(1/2)], [1, 1]])

>

The eigenvalues, and hence the eigenvectors, appear in complex conjugate pairs.  These complex-conjugate eigenpairs and the products exp(lambda[i]*t) E[i] are

> for i from 1 to Dimension(L) do

>  e_val[i] := L[i];

>  e_vec[i] := Column(V,i);

>  e_sol[i] := exp( e_val[i]*t ) * e_vec[i];

>  print( lambda[i]=e_val[i], E[i]=e_vec[i], XX[i]=e_sol[i] );

> od:

lambda[1] = 1+I*6^(1/2), E[1] = Vector[column]([[1/3*I*6^(1/2)], [1]]), XX[1] = Vector[column]([[1/3*I*exp((1+I*6^(1/2))*t)*6^(1/2)], [exp((1+I*6^(1/2))*t)]])

lambda[2] = 1-I*6^(1/2), E[2] = Vector[column]([[-1/3*I*6^(1/2)], [1]]), XX[2] = Vector[column]([[-1/3*I*exp((1-I*6^(1/2))*t)*6^(1/2)], [exp((1-I*6^(1/2))*t)]])

>

The vectors XX[1] and XX[2] are solutions to the system of ODEs, but they are complex-valued. Real-valued solutions, such as the ones returned by dsolve , would be more useful.

By Euler's formula, if alpha and beta are real numbers, then

 exp(alpha+beta*I) = exp(alpha)*(cos(beta)+I*sin(beta))
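Maple's evalc command, which treats unknown symbols as real, reproduces this identity; a quick sketch:

> # Sketch: expand the complex exponential into its real and imaginary parts.
> evalc( exp(alpha + beta*I) );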

This is how the complex exponentials in XX[1] and XX[2] can be simplified.  However, note that the sum and difference of the complex-conjugate pair

z[1] = a+b*i

z[2] = a-b*i

are respectively,

z[1]+z[2] = 2*a

z[1]-z[2] = 2*i*b

Hence, the real and imaginary parts of z[1] are respectively

a = (z[1]+z[2])/2

b = (z[1]-z[2])/(2*i)

Linear combinations of solutions to a linear homogeneous equation are again solutions of the equation.  This is the principle of superposition, which says nothing more than that if U and V are solutions of X' = A X, then W = a U + b V is also a solution, as the following calculation verifies.

W' = (a U + b V)' = a U' + b V' =  a A U + b A V = A (a U + b V) = A W

(See also Section 19.B in Lesson 19.)
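For the system at hand, this principle can be sketched directly with the two complex eigensolutions found above (the symbolic constants k1 and k2 and the name W are introduced only for this check):

> # Sketch: any combination W = k1*XX[1] + k2*XX[2] satisfies the homogeneous system W' = A . W.
> W := k1 * e_sol[1] + k2 * e_sol[2]:
> map( simplify, map( diff, W, t ) - A . W );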

Thus, the real and imaginary parts of a single complex solution are themselves distinct (real) solutions.  The real and imaginary parts of XX[1] can be obtained via

> U := map( evalc@Re, e_sol[1] );

> V := map( evalc@Im, e_sol[1] );

U := Vector[column]([[-1/3*exp(t)*sin(6^(1/2)*t)*6^(1/2)], [exp(t)*cos(6^(1/2)*t)]])

V := Vector[column]([[1/3*exp(t)*6^(1/2)*cos(6^(1/2)*t)], [exp(t)*sin(6^(1/2)*t)]])

>

These functions form a real-valued basis for the general solution of the homogeneous system. (These solutions may appear to differ from the ones reported by dsolve ; closer inspection reveals that these vectors are, at worst, parallel to the ones found by dsolve and so have the same linear span.)
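The parallelism can be confirmed with a short sketch: two 2-vectors are parallel exactly when the determinant formed from their components vanishes.  Here U is parallel to X2 and V is parallel to X1.

> # Sketch: both expressions should simplify to zero.
> simplify( U[1]*X2[2] - U[2]*X2[1] );
> simplify( V[1]*X1[2] - V[2]*X1[1] );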

>

Example 3: Repeated Eigenvalues

For a final example, consider the homogeneous system

> sys3 := [ diff( x(t), t ) = x(t) - 2*y(t), diff( y(t), t ) = y(t) ];

sys3 := [diff(x(t), t) = x(t)-2*y(t), diff(y(t), t) = y(t)]

>

which has coefficient matrix and forcing vector

> (Aneg,b) := GenerateMatrix( eval(sys3,[x(t)=_x,y(t)=_y]), [_x,_y] ):

> A := -Aneg;

A := Matrix([[1, -2], [0, 1]])

>

The general solution to the system, as reported by dsolve , is

> infolevel[dsolve] := 3:

> gen_soln := dsolve( sys3, {x(t),y(t)} );

> infolevel[dsolve] := 0:

`-> Solving each unknown as a function of the next ones using the order:\
[x(t), y(t)]`

`Methods for first order ODEs:`

`--- Trying classification methods ---`

`trying a quadrature`

`trying 1st order linear`

`<- 1st order linear successful`

`Methods for first order ODEs:`
`--- Trying classification methods ---`

`trying a quadrature`

`trying 1st order linear`

`<- 1st order linear successful`

gen_soln := {y(t) = _C2*exp(t), x(t) = -(2*_C2*t-_C1)*exp(t)}

>

A particular solution to the system is the trivial solution

> X_soln := eval( X, gen_soln ):

> Xp := eval( X_soln, [_C1=0,_C2=0] );

Xp := Vector[column]([[0], [0]])

>

Note that any values for _C1 and _C2 will yield a particular solution; using zero for all constants of integration is the easiest and most common choice.

A basis for the homogeneous solution is formed by the pair of functions

> X1 := map( coeff, X_soln, _C1 );

> X2 := map( coeff, X_soln, _C2 );

X1 := Vector[column]([[exp(t)], [0]])

X2 := Vector[column]([[-2*t*exp(t)], [exp(t)]])

>

When these pieces are assembled, the general solution of the system can be written in the form

> X = _C1 * X1 + _C2 * X2 + Xp;

Vector[column]([[x(t)], [y(t)]]) = Vector[column]([[-2*exp(t)*_C2*t+exp(t)*_C1], [_C2*exp(t)]])

>

Notice that both terms in the homogeneous solution involve the same exponential, exp(t).  Moreover, after factoring out the exponential, one of the homogeneous terms has a coefficient that is not constant but is a function of t.  These features are explained by the eigenvalue decomposition, which begins as usual:

> L,V := Eigenvectors( A );

L, V := Vector[column]([[1], [1]]), Matrix([[1, 0], [0, 0]])

>

It is not difficult to see that lambda = 1 is an eigenvalue of the coefficient matrix.  The fact that this eigenvalue appears twice in the list of eigenvalues means that the algebraic multiplicity of lambda = 1 is 2.  The geometric multiplicity is only 1, because there is only one linearly independent eigenvector (note the column of zeros in the matrix of eigenvectors).

In more complicated situations it is often easier to work with this information in a different form.

> e_decomp := Eigenvectors( A, output=list );

e_decomp := [[1, 2, {Vector[column]([[1], [0]])}]]

The new form collects all of the information for each eigenvalue in a separate sublist.  Each sublist contains three entries.  The first entry is an eigenvalue.  The second entry is the algebraic multiplicity of that eigenvalue, that is, the number of times the eigenvalue is a root of the characteristic equation.  The third entry is a set containing the linearly independent eigenvectors that belong to the eigenvalue; the number of these eigenvectors is the geometric multiplicity of the eigenvalue.

>

Extraction of the one eigenpair is implemented via

> for i from 1 to nops(e_decomp) do

>  e_val[i] := e_decomp[i][1];

>  e_vec[i] := e_decomp[i][3][1];

>  e_sol[i] := exp( e_val[i]*t ) * e_vec[i];

>  print( lambda[i]=e_val[i], E[i]=e_vec[i], XX[i]=e_sol[i] );

> od:

lambda[1] = 1, E[1] = Vector[column]([[1], [0]]), XX[1] = Vector[column]([[exp(t)], [0]])

>

Because lambda = 1 is an eigenvalue with algebraic multiplicity 2 and geometric multiplicity 1, the eigenvalue decomposition yields only one solution to the homogeneous equation. This solution, namely,

> XX[1] = e_sol[1];

XX[1] = Vector[column]([[exp(t)], [0]])

>

is referred to as a straight-line solution. The second basis solution for the homogeneous solution will be sought in the form

XX[2] = t*XX[1] + exp(lambda[1]*t)*V2

and obtained in Maple as

> unassign('V2');
sol2_form := exp(e_val[1]*t) * (t*e_vec[1]+V2);

sol2_form := exp(t)*(V2+Vector[column]([[t], [0]]))

>

where

> V2 := <x2,y2>;

V2 := Vector[column]([[x2], [y2]])

>

That is, assume

> r1 := equate( X, sol2_form );

r1 := {x(t) = exp(t)*(x2+t), y(t) = exp(t)*y2}

>

Now, substitution of the proposed solution into the (homogeneous) system of ODEs yields

> sol2_requires := eval( sys3, r1 );

sol2_requires := [exp(t)*(x2+t)+exp(t) = exp(t)*(x2+t)-2*exp(t)*y2, exp(t)*y2 = exp(t)*y2]

>

Note that the second condition is trivially satisfied for all values of y2. However, the first condition is satisfied only when

> sol2_satisfied := solve( identity(sol2_requires[1],t), {x2,y2} );

sol2_satisfied := {x2 = x2, y2 = (-1)/2}

>

This leads to the one-parameter family of solutions

> sol2_family := eval( sol2_form, sol2_satisfied );

sol2_family := Vector[column]([[exp(t)*(x2+t)], [-1/2*exp(t)]])

>

Notice that the component of the solution that depends on the remaining parameter

> map( coeff, sol2_family, x2 );

Vector[column]([[exp(t)], [0]])

>

is a multiple of the first solution to the homogeneous system

> e_sol[1];

Vector[column]([[exp(t)], [0]])

>

and so is not needed again in the basis. Therefore, the second basis solution for the homogeneous solution is chosen to be

> e_sol[2] := eval( sol2_family, x2=0 );

e_sol[2] := Vector[column]([[t*exp(t)], [-1/2*exp(t)]])

>

To summarize, the basis of homogeneous solutions found by this method is

> { e_sol[1], e_sol[2] };

{Vector[column]([[exp(t)], [0]]), Vector[column]([[t*exp(t)], [-1/2*exp(t)]])}

>

and the basis of solutions found by inspection of the dsolve solution is

> { X1, X2 };

{Vector[column]([[-2*t*exp(t)], [exp(t)]]), Vector[column]([[exp(t)], [0]])}

>

The spans of these bases are easily seen to be identical.
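A quick sketch confirming this: e_sol[1] coincides with X1, and e_sol[2] is the multiple -1/2 of X2, so the two bases span the same set of homogeneous solutions.

> # Sketch: both differences should be the zero vector.
> map( simplify, e_sol[1] - X1 );
> map( simplify, e_sol[2] + 1/2 * X2 );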

As a final note, observe that this system is partially decoupled: the ODE for y(t) is independent of x(t), as seen from

> ode_x,ode_y := selectremove( has, sys3, diff(x(t),t) );

ode_x, ode_y := [diff(x(t), t) = x(t)-2*y(t)], [diff(y(t), t) = y(t)]

>

Thus, the second equation can be solved for y(t) without knowledge of x(t) .  This gives

> sol_y := dsolve( ode_y[1], y(t), [linear] );

sol_y := y(t) = exp(t)*_C1

>

Now, since y(t) is known, this result can be substituted into the ODE for x(t) , giving

> ode_x2 := eval( ode_x, sol_y );

ode_x2 := [diff(x(t), t) = x(t)-2*exp(t)*_C1]

>

and solved for x(t) , giving

> sol_x := dsolve( ode_x2[1], x(t) );

sol_x := x(t) = (-2*_C1*t+_C2)*exp(t)

>

The result of these calculations is

> subs(sol_x,sol_y, X);

Vector[column]([[(-2*_C1*t+_C2)*exp(t)], [exp(t)*_C1]])

>

which is equivalent to any of the solutions obtained above.

> X_soln;

Vector[column]([[-(2*_C2*t-_C1)*exp(t)], [_C2*exp(t)]])

>

11.D Numerical Solutions

The graphical solutions produced by DEplot are obtained using a numerical approximation to the solution. The numerical method used to compute the approximations can be specified via the method= argument (the default is method=classical[rk4]). The dsolve command, with type=numeric, can be used to obtain direct access to a numerical solution of an initial value problem.

For the initial value problem

> IVP;

{x(0) = 0, diff(x(t), t) = x(t)-2*y(t)+2, diff(y(t), t) = 3*x(t)+y(t)+1, y(0) = 1}

>

a procedure for the numerical approximation to the solution via Euler's method is obtained with

> soln_euler := dsolve( IVP, [x(t),y(t)], type=numeric, method=classical );

soln_euler := proc (x_classical) local res, data, vars, solnproc, outpoint, ndsol, i; option `Copyright (c) 2000 by Waterloo Maple Inc. All rights reserved.`; _EnvDSNumericSaveDigits := Digits; Digits...

>

The approximate solution at t = 1 is

> soln_euler(1);

[t = 1., x(t) = -2.18551611270728906, y(t) = 1.36240745623086278]

>
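For comparison, the exact solution found in Section 11.C-1 (part_soln) can be evaluated at the same time; a quick sketch:

> # Sketch: exact values of x(1) and y(1) from the analytic solution, for comparison with Euler's method.
> evalf( eval( part_soln, t=1 ) );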

The odeplot command can be used to create a phase portrait for the solution, and results in Figure 11.12.

> odeplot( soln_euler, [x(t),y(t)], 0..4, title="Figure 11.12" );

[Plot]

>

Alternatively, a plot of the individual components of the solution can also be obtained with the odeplot command, as shown in Figure 11.13.

> odeplot( soln_euler, [[t,x(t)],[t,y(t)]], 0..4,

>         labels=[`t`,``], legend=[`x(t)`,`y(t)`], title="Figure 11.13" );

[Plot]

>

See also the discussion in Section 1.C for additional details and options for working with Maple-generated numerical solutions to an IVP.

>

[Back to ODE Powertool Table of Contents]

>