ORDINARY DIFFERENTIAL EQUATIONS POWERTOOL
Lesson 11 -- First-Order Linear Systems
Prof. Douglas B. Meade
Industrial Mathematics Institute
Department of Mathematics
University of South Carolina
Columbia, SC 29208
URL: http://www.math.sc.edu/~meade/
E-mail: meade@math.sc.edu
Copyright 2001 by Douglas B. Meade
All rights reserved
-------------------------------------------------------------------
Outline of Lesson 11
11.A Equilibrium Analysis
11.A-1 Nullclines
11.A-2 Equilibrium Solutions
11.B Graphical Analysis
11.B-1 Direction Fields
11.B-2 Phase Portraits
11.B-3 Solution Curves
11.C Analytic Solutions
11.C-1 One-Step Solutions using dsolve
11.C-2 Eigenvalue Analysis
Example 1 Real and Distinct Eigenvalues
Example 2 Complex Eigenvalues
Example 3 Repeated Eigenvalues
11.D Numerical Solutions
Initialization
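A minimal sketch of the package loads assumed throughout this lesson (DEtools for DEplot, plots for implicitplot, odeplot, and display, and LinearAlgebra for the matrix computations):

> with(DEtools):        # DEplot, phaseportrait
> with(plots):          # implicitplot, odeplot, display
> with(LinearAlgebra):  # GenerateMatrix, Eigenvectors, Column, LinearSolve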
In the xy-plane, an isocline for the differential equation y' = f(x, y) is a curve along which f(x, y) has a constant value. Thus, an isocline is a level curve of the function f.
If the constant value of f on the isocline is zero, then the isocline is called a nullcline.
A nullcline for a two-dimensional first-order system of differential equations is a curve along which at least one of the dependent variables (x or y) does not change, i.e., where x'(t) = 0 or y'(t) = 0.
The nullclines for the system
x'(t) = f(x, y), y'(t) = g(x, y)
are found by graphing the curves defined implicitly by the equations f(x, y) = 0 and g(x, y) = 0. While it can be difficult to describe the nullclines of a nonlinear system in general, the nullclines for a linear system are always straight lines that intersect at an equilibrium solution.
Consider the general first-order linear system with constant coefficients, namely,
x'(t) = a x(t) + b y(t) + f1
y'(t) = c x(t) + d y(t) + f2
which can be rendered in Maple as a pair of first-order differential equations.
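A sketch of how such a system might be entered; the equation names de1 and de2, and the coefficient names a, b, c, d, f1, f2, are illustrative choices rather than the worksheet's own:

> de1 := diff(x(t),t) = a*x(t) + b*y(t) + f1:    # general linear system with
> de2 := diff(y(t),t) = c*x(t) + d*y(t) + f2:    # constant coefficients and constant forcing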
The nullclines of the system are the straight lines whose equations are
a x + b y + f1 = 0 and c x + d y + f2 = 0.
The change from x(t) to x and from y(t) to y is made to simplify the graphical display of the nullclines.
For a concrete example, select the following values of the coefficients.
Then the system is
and the nullclines are
or, expressing each line in slope-intercept form.
These nullclines are graphed in Figure 11.1, created from an implicitplot of each nullcline. (The implicitplot command is needed to handle cases where the nullcline is a vertical line.)
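The specific coefficient values appear only in the worksheet's output, so the following sketch uses a hypothetical system, x' = x - 2*y + 3, y' = 3*x + y + 2 (chosen only so that its qualitative behavior resembles the figures), to show how a plot like Figure 11.1 can be assembled with implicitplot:

> NCx := x - 2*y + 3 = 0:      # nullcline on which x' = 0  (hypothetical coefficients)
> NCy := 3*x + y + 2 = 0:      # nullcline on which y' = 0
> P1 := implicitplot( NCx, x=-5..5, y=-5..5, color=blue ):
> P2 := implicitplot( NCy, x=-5..5, y=-5..5, color=green ):
> display( [P1, P2] );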
Equilibrium solutions (if they exist) occur at the intersection of nullclines corresponding to each of the differential equations in the system. For a general nonlinear system, special care must be taken to ensure that an intersection point of nullclines is actually an equilibrium solution. Fortunately, for a two-dimensional linear system, any intersection of the nullclines is automatically an equilibrium solution.
In terms of the example introduced above, namely,
with nullclines
the intersection of the nullclines can be approximated from Figure 11.1. The exact solution is found to be
or, expressed as the coordinates of a point,
Graphically, the equilibrium solution is the intersection of the two nullclines, as shown in Figure 11.2.
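With the hypothetical nullclines NCx and NCy from the sketch above, the equilibrium can be computed by solving the two nullcline equations simultaneously:

> EqPt := solve( {NCx, NCy}, {x, y} );    # intersection of the nullclines
> eval( [x, y], EqPt );                   # coordinates of the equilibrium point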
The direction field (see Lesson 1) can be a source of much information about a system of ODEs, even when an explicit formula for the solution is not known. For starters, the direction field for a system shows the direction in which the solution moves at any point of the phase plane.
Again consider the system
first seen in Section 11.A. The aim here is to construct and study the direction field for this system.
Even though this is an autonomous system (recall from Lesson 1 that the independent variable does not explicitly appear on the right-hand side), Maple's DEplot command requires an interval for the independent variable t. The specific interval chosen is irrelevant, provided the endpoints are numeric. Hence, pick the domain
The direction field for this system appears in Figure 11.3.
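A sketch of a DEplot call that produces such a direction field, again for the hypothetical system used above (stored here in a list named sys):

> sys := [ diff(x(t),t) = x(t) - 2*y(t) + 3,
>          diff(y(t),t) = 3*x(t) + y(t) + 2 ]:     # hypothetical system
> DEplot( sys, [x(t), y(t)], t=0..10, x=-5..5, y=-5..5, arrows=medium );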
From this picture it is evident that solutions spiral outward from a point in the second quadrant. To see that the center of these spirals is the equilibrium solution, superimpose the plot of the equilibrium solution and the nullclines, as done in Figure 11.4.
The nullclines show exactly where the individual components pass through critical points and, hence, where they transition between increasing and decreasing functions (of t). In practice, if a computer is not available to create the direction field, the constancy of the sign of the derivative of each component in each region created by the nullclines provides insight into the qualitative behavior of solutions to a linear, autonomous system of ODEs.
11.B-2 Phase Portraits
A phase portrait for an autonomous system of ODEs displays solutions to the system in "phase space", where the solutions x(t) and y(t) are treated as parametric equations for the curve (x(t), y(t)). (See Lesson 1.)
To create a phase portrait in Maple, the DEplot command is recommended. (The phaseportrait command, also from the DEtools package, is essentially the same, but there is no reason to learn and remember a new command when simple modifications to DEplot suffice.)
One solution curve is generated for each initial condition. For a two-dimensional system in x and y, each initial condition can be specified either as a triple of numbers [t0, x0, y0] or as a pair of equations [x(t0) = x0, y(t0) = y0]. For example, initial conditions at the points with integer coordinates along the axes can be specified as one list for the x-axis, a second list for the y-axis, and a composite list for both axes, as in the following sketch.
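Such lists could be built with seq; the names ICx, ICy, and ICs are illustrative:

> ICx := [ seq( [x(0)=i, y(0)=0], i=-4..4 ) ]:     # integer points on the x-axis
> ICy := [ seq( [x(0)=0, y(0)=j], j=-4..4 ) ]:     # integer points on the y-axis
> ICs := [ op(ICx), op(ICy) ]:                     # composite list for both axes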
As in the previous uses of DEplot, an interval of values for the independent variable is required. Unlike the use of DEplot for the creation of direction fields, this argument does have a meaning for phase portraits. The time interval provides limits for the numerical methods used to obtain approximate solutions for each initial condition.
Using the domain
for the independent variable, the phase portrait for this system and these initial conditions appears in Figure 11.5.
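A sketch of the corresponding DEplot call, using the hypothetical sys and the initial-condition list ICs from the sketches above (the time interval 0..2 is likewise illustrative):

> DEplot( sys, [x(t), y(t)], t=0..2, ICs,
>         stepsize=0.05, arrows=none, linecolor=blue );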
To restrict the solutions to a pre-determined "window", include a viewing window such as
This gives Figure 11.6.
A simple change to the arrows= option includes the direction field in the phase portrait, as seen in Figure 11.7.
In Figure 11.8, this plot is superimposed on the direction field, nullclines, and equilibrium solution.
Even though the independent variable is not explicitly displayed in a phase portrait, it is implicitly present in that the solution curves are traveled at different speeds. To show the coordination between solutions from different initial conditions, specify the linecolor= option with a function or expression that depends on the independent variable. For example, this is done in Figure 11.9.
Note that it is not necessary to be too concerned about the specific range of values of the linecolor= option; DEplot automatically normalizes these values to the interval [0,1].
Figure 11.9 animates the tracing of the trajectories in the phase portrait.
Note that to provide a consistent resolution for all solution curves in the animation, the stepsize= option has been used.
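A sketch of such a call for the hypothetical system: linecolor is given an expression in t and stepsize fixes the resolution. The animatecurves option, assumed here and available only in more recent versions of Maple, is one way to produce the animation:

> DEplot( sys, [x(t), y(t)], t=0..2, ICs,
>         stepsize=0.05, arrows=none,
>         linecolor=sin(t),          # color varies with the independent variable
>         animatecurves=true );      # assumed option; animates the tracing of the curves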
For the two-dimensional linear system, a solution curve is a plot of one component of the solution as a function of the independent variable and is produced using the DEplot command. The only changes to the arguments are removing the specification of the display window size and the arrows= argument, and changing the scene= argument to specify the component to be plotted.
For example, when the initial condition is
Figure 11.10, containing graphs of the x- and y-components of the particular solution satisfying this initial condition, is created and displayed with
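A sketch of the pair of plots for a single hypothetical initial condition; the scene option selects the component, and display shows both graphs together:

> ic := [ x(0)=1, y(0)=1 ]:                        # hypothetical initial condition
> Px := DEplot( sys, [x(t), y(t)], t=0..2, [ic],
>               stepsize=0.05, scene=[t, x(t)], linecolor=blue ):
> Py := DEplot( sys, [x(t), y(t)], t=0..2, [ic],
>               stepsize=0.05, scene=[t, y(t)], linecolor=blue ):
> display( [Px, Py] );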
If multiple initial conditions are specified as in
one solution curve is produced for each initial condition. The individual plots are created with
and can be displayed together, as in Figure 11.11.
To eliminate some of the clutter from the multiple plots, it is sometimes advantageous to display each pair of solutions in side-by-side plots, as seen in the following figure.
Note how the colors are used to indicate pairs of solutions corresponding to the same initial condition. The blue curve on the left and the blue curve on the right are the graphs of the two components of the solution corresponding to the first initial condition. The pair of green curves corresponds to the second initial condition.
For a first-order linear system of ODEs, Maple's dsolve command should be able to find the general solution to the system and the particular solution for any initial condition.
For example, for the system of ODEs
the general solution is
more easily grasped if written as
The solution satisfying an initial condition, say
can be found by substituting the initial condition into the general solution, thereby producing the equations
which can then be solved for the constants of integration, namely,
The corresponding solution of this system of differential equations is
again more easily grasped if written as
Alternatively, the solution to the initial value problem
could be found with the single command
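Since the system and initial condition appear only in the worksheet's output, the following sketch applies both approaches to the hypothetical system used earlier, with the illustrative initial condition x(0) = 1, y(0) = 1:

> odesys := { diff(x(t),t) = x(t) - 2*y(t) + 3,
>             diff(y(t),t) = 3*x(t) + y(t) + 2 }:                   # hypothetical system
> gensol := dsolve( odesys, {x(t), y(t)} );                          # general solution
> ivpsol := dsolve( odesys union {x(0)=1, y(0)=1}, {x(t), y(t)} );   # IVP in one step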
While the odetest command is unable to check solutions for a system of ODEs, it is not difficult to verify that the two solutions presented above are identical. The equivalence of the x-components is established with
whereas that of the y-components is established with
That these solutions satisfy the system of differential equations is established with
and that they satisfy the initial condition is established with
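One way to carry out these checks for the hypothetical solution ivpsol from the previous sketch is direct substitution:

> simplify( eval( subs( ivpsol, odesys ) ) );   # each ODE should reduce to an identity
> eval( ivpsol, t=0 );                          # should reproduce x(0) = 1 and y(0) = 1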
If the constant-coefficient, first-order linear system of ODEs is written in the matrix form
X' = A X + b
where X = X(t) is the vector of unknown functions, X' is the vector of derivatives of the unknown functions, A is the coefficient matrix, and b is the non-homogeneous ("forcing") vector, the solution can be constructed from the eigenvalues and eigenvectors (eigenpairs) of the matrix A. For example, the matrix form of the system
would be
X' = A X + b, with the entries of A and b determined by the coefficients of the system.
An eigenpair for a matrix A consists of a scalar λ and a direction (expressed by an eigenvector V) satisfying the defining equation
A V = λ V
Thus, the eigenvector represents a direction that is invariant under multiplication by the matrix A. At most, a vector in this direction has its length changed by the scale factor λ, called the eigenvalue.
If this defining equation is rewritten in the form
(A - λ I) V = 0
where I is the 2 x 2 identity matrix, then the eigenvalues are found as the roots of the characteristic equation, that is, the equation det(A - λ I) = 0.
If the eigenpairs are (λ1, V1) and (λ2, V2), then the general solution of the homogeneous system X' = A X is given by the sum
X(t) = _C1 e^(λ1 t) V1 + _C2 e^(λ2 t) V2
where _C1 and _C2 are arbitrary constants (provided the two eigenvectors are linearly independent).
The examples below detail how the solution of the linear system can be constructed from the eigenpairs of the system matrix A.
Example 1: Real and Distinct Eigenvalues
Consider the system of ODEs
Let the vector of unknown functions be
The coefficient matrix A and forcing vector b can be extracted via the GenerateMatrix command from the LinearAlgebra package:
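Example 1's system is visible only in the worksheet's output, so the sketch below applies GenerateMatrix to a hypothetical homogeneous system, x' = x + y, y' = x + y, whose eigenvalues happen to be 0 and 2. The right-hand sides of the ODEs are handed to GenerateMatrix as linear equations in x(t) and y(t):

> eq1 := diff(x(t),t) = x(t) + y(t):        # hypothetical homogeneous system
> eq2 := diff(y(t),t) = x(t) + y(t):
> A, b := GenerateMatrix( [ rhs(eq1) = 0, rhs(eq2) = 0 ], [ x(t), y(t) ] );
> # A is the 2 x 2 coefficient matrix; b is the zero vector since the system is homogeneous.

For a non-homogeneous system some care with signs would be needed, since GenerateMatrix returns A and b satisfying A . <x(t), y(t)> = b and therefore moves constant terms to the other side of the equation.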
Thus, the system can be expressed in vector form as
The general solution to this system of ODEs is
which, when written in vector form
and separated into terms involving at most one of the two constants of integration, can be written as
The two vectors
are the eigenvectors corresponding, respectively, to the eigenvalues
The eigenvalues are the roots of the characteristic equation, and Maple has a built-in command for obtaining the equivalent of the equation det(A - λ I) = 0.
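One possibility (the worksheet's exact command is not shown) is the CharacteristicPolynomial command from LinearAlgebra, applied here to the hypothetical matrix A from the previous sketch:

> chareq := CharacteristicPolynomial( A, lambda ) = 0;   # lambda^2 - 2*lambda = 0 for this A
> solve( chareq, lambda );                               # roots 0 and 2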
That each eigenpair satisfies an equation of the form
is established with the following computations:
A V1 = λ1 V1 = 0 V1 = 0
A V2 = λ2 V2 = 2 V2
The eigenpairs of a 2 x 2 matrix can generally be found by manual computation. However, the calculations can become tedious, and are often prone to arithmetic errors. Instead, we will rely on Maple's Eigenvectors command, the output of which must be carefully deconstructed to extract the eigenvalues and eigenvectors.
Application of that command here yields
Observe that this assignment creates both a vector containing the eigenvalues and the corresponding matrix of eigenvectors. It is not difficult to extract the eigenvalues (λ1 and λ2) and the corresponding eigenvectors (V1 and V2). These eigenpairs, as well as the eigensolutions e^(λ1 t) V1 and e^(λ2 t) V2, are obtained and printed via
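A sketch of the computation and the extraction for the hypothetical matrix A; the names lambda1, lambda2, V1, V2, X1, X2 are illustrative:

> evals, evecs := Eigenvectors( A ):           # Vector of eigenvalues, Matrix of eigenvectors
> lambda1 := evals[1]:   V1 := Column( evecs, 1 ):
> lambda2 := evals[2]:   V2 := Column( evecs, 2 ):
> X1 := exp(lambda1*t) * V1;                   # first eigensolution
> X2 := exp(lambda2*t) * V2;                   # second eigensolution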
Note that these two eigensolutions are the basis vectors for the linear combination that is the general solution of this homogeneous system.
Example 2: Complex Eigenvalues
Returning to the system introduced at the beginning of this lesson, namely,
recall that this system is non-homogeneous. In fact, the coefficient matrix and forcing term are
As seen previously, the general solution of this system is
which, when written in vector form and factored, appears as
Since this system is non-homogeneous, the particular solution is non-trivial. In this case, one particular solution is
and a basis for the solution of the corresponding homogeneous system is
This leads to the following vector form for the general solution of this system
To understand the relationship between the general solution and the eigenvalue decomposition of the coefficient matrix, consider
The eigenvalues, and hence the eigenvectors, appear in complex conjugate pairs. These complex conjugate eigenpairs and the products e^(λ t) V are
These products are solutions to the system of ODEs, but they are complex-valued. Real-valued solutions, such as the ones returned by dsolve, would be more useful.
By Euler's formula, if a and b are real numbers, then
e^((a + i b) t) = e^(a t) ( cos(b t) + i sin(b t) ).
This is how the complex exponentials in the two eigensolutions can be simplified. However, note that the sum and difference of a complex-conjugate pair are, respectively, twice the real part and 2i times the imaginary part of one member of the pair.
Hence, the real and imaginary parts of either eigensolution are, up to constant factors, linear combinations of the two eigensolutions.
Linear combinations of solutions to a linear equation are again solutions of the equation. This is the principle of superposition, which says nothing more than that if U and V are solutions of X' = A X, then W = c1 U + c2 V is also a solution for any constants c1 and c2, as the following calculation verifies.
W' = (c1 U + c2 V)' = c1 U' + c2 V' = c1 A U + c2 A V = A (c1 U + c2 V) = A W
(See also Section 19.B in Lesson 19.)
Thus, the real and imaginary parts of a single complex solution are themselves distinct (real) solutions. The real and imaginary parts of one of the complex eigensolutions can be obtained via
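A sketch of one way to do this in Maple. The coefficient matrix below is the hypothetical one used for this system earlier in the lesson (its eigenvalues are complex); evalc, Re, and Im split one complex eigensolution into its real and imaginary parts componentwise:

> A2 := Matrix( [[1, -2], [3, 1]] ):            # hypothetical coefficient matrix
> ev, EV := Eigenvectors( A2 ):
> Z1 := exp( ev[1]*t ) * Column( EV, 1 ):       # one complex-valued eigensolution
> U := map( z -> evalc( Re(z) ), Z1 );          # real part: a real-valued solution
> V := map( z -> evalc( Im(z) ), Z1 );          # imaginary part: a second real-valued solution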
These functions form a real-valued basis for the general solution of the homogeneous system. (These solutions may appear to differ from the ones reported by dsolve; closer inspection reveals that these vectors are, at worst, parallel to the ones found by dsolve and so have the same linear span.)
Example 3: Repeated Eigenvalues
For a final example, consider the homogeneous system
which has coefficient matrix and forcing vector
The general solution to the system, as reported by dsolve, is
A particular solution to the system is the trivial solution
Note that any values for _C1 and _C2 will yield a particular solution; using zero for all constants of integration is the easiest and most common choice.
A basis for the homogeneous solution is formed by the pair of functions
When these pieces are assembled, the general solution of the system can be written in the form
Notice that both terms in the homogeneous solution involve the same exponential. After factoring out this exponential, one of the homogeneous terms has a coefficient that is not constant; it is a function of t. These features will be explained by the eigenvalue decomposition, which starts in the usual way.
It is not difficult to see that the coefficient matrix has only one eigenvalue. The fact that this eigenvalue appears twice in the list of eigenvalues means its algebraic multiplicity is 2. The geometric multiplicity is only 1 because there is only one linearly independent eigenvector.
In more complicated situations it is often easier to work with this information in a different form.
The new form collects all information for each eigenvalue in a separate list. Each sublist contains three entries: the first entry is an eigenvalue; the second is its algebraic multiplicity, the number of times the eigenvalue is a root of the characteristic equation; the third is the set of all eigenvectors belonging to that eigenvalue. The number of eigenvectors that correspond to an eigenvalue is its geometric multiplicity.
Extraction of the one eigenpair is implemented via
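A sketch of this extraction for a hypothetical upper-triangular matrix with a repeated eigenvalue (the worksheet's actual matrix appears only in its output); the output='list' form of Eigenvectors groups each eigenvalue with its algebraic multiplicity and its eigenvectors:

> A3 := Matrix( [[2, 1], [0, 2]] ):             # hypothetical matrix with repeated eigenvalue 2
> eiglist := Eigenvectors( A3, output='list' );
> # eiglist = [ [2, 2, {<1,0>}] ] : eigenvalue, algebraic multiplicity, set of eigenvectors
> lambda1 := eiglist[1][1]:                     # the repeated eigenvalue
> V1 := eiglist[1][3][1]:                       # the single (independent) eigenvector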
Because the eigenvalue has algebraic multiplicity 2 but geometric multiplicity 1, the eigenvalue decomposition yields only one solution to the homogeneous equation. This solution, namely X1(t) = e^(λ t) V1,
is referred to as a straight-line solution. The second solution for the basis of the homogeneous solution will be sought in the form
X2(t) = t e^(λ t) V1 + e^(λ t) W
where W is a constant vector whose components, denoted x2 and y2, remain to be determined.
Now, substitution of the proposed solution into the (homogeneous) system of ODEs yields
Note that the second condition is trivially satisfied for all values of y2. However, the first condition is satisfied only when
This leads to the one-parameter family of solutions
Notice that the component of the solution that depends on the remaining parameter
is a multiple of the first solution to the homogeneous system
and so is not needed again in the basis. Therefore, the second basis solution for the homogeneous solution is chosen to be
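The whole computation, sketched for the hypothetical matrix A3 above (the unknowns x2 and y2 are the components of W, and the final lines make one convenient choice of the free parameter):

> W := <x2, y2>:                                              # unknown constant vector
> conds := convert( (A3 - lambda1*IdentityMatrix(2)) . W - V1, list );
> solve( { conds[1] = 0, conds[2] = 0 }, {x2, y2} );          # y2 is determined, x2 stays free
> Wp := <0, 1>:                                               # take x2 = 0, y2 = 1
> X2 := t*exp(lambda1*t)*V1 + exp(lambda1*t)*Wp;              # second basis solution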
To summarize, the basis of homogeneous solutions found by this method is
and the basis of solutions found by inspection of the dsolve solution is
The spans of these bases are easily seen to be identical.
As a final note, observe that this system is decoupled: the ODE for y(t) does not involve x(t), so the second equation can be solved for y(t) without knowledge of x(t). Once y(t) is known, the result can be substituted into the ODE for x(t), which can then be solved for x(t). The result of these calculations is a general solution that is equivalent to any of the solutions obtained above.
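A sketch of this step-by-step approach for the hypothetical decoupled system x' = 2x + y, y' = 2y (the system corresponding to the matrix A3 above):

> ode_y := diff(y(t),t) = 2*y(t):                   # the decoupled equation for y(t)
> sol_y := dsolve( ode_y, y(t) );
> ode_x := diff(x(t),t) = 2*x(t) + rhs(sol_y):      # substitute y(t) into the x equation
> sol_x := dsolve( ode_x, x(t) );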
The graphical solutions produced by DEplot are obtained using a numerical approximation to the solution. The numerical method used to compute the approximations can be specified via the method= argument (the default is method=classical[rk4]). The dsolve command, with type=numeric, can be used to obtain direct access to a numerical solution to an initial value problem.
For the initial value problem
a procedure for the numerical approximation to the solution via Euler's method is obtained with
The approximate solution at a given value of t is then obtained by evaluating this procedure.
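Since the worksheet's initial value problem is shown only in its output, the sketch below sets up Euler's method (classical[foreuler]) for the hypothetical system with an illustrative initial condition, and then evaluates the returned procedure at one value of t:

> ivp := { diff(x(t),t) = x(t) - 2*y(t) + 3,
>          diff(y(t),t) = 3*x(t) + y(t) + 2,
>          x(0) = 1, y(0) = 1 }:                       # hypothetical IVP
> numsol := dsolve( ivp, {x(t), y(t)}, type=numeric,
>                   method=classical[foreuler], stepsize=0.05 ):
> numsol( 1.0 );                                       # approximate solution at t = 1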
The odeplot command can be used to create a phase portrait for the solution, and results in Figure 11.12.
Alternatively, a plot of the individual components of the solution can also be obtained with the odeplot command, as shown in Figure 11.13.
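Sketches of the two odeplot calls for the numerical solution numsol from the previous block:

> odeplot( numsol, [x(t), y(t)], 0..2 );              # phase portrait (cf. Figure 11.12)
> odeplot( numsol, [[t, x(t)], [t, y(t)]], 0..2 );    # component plots (cf. Figure 11.13)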
See also the discussion in Section 1.C for additional details and options for working with Maple-generated numerical solutions to an IVP.
[Back to ODE Powertool Table of Contents]