
blowflies.mws

The Dynamics of Arthropod Predator-prey Systems

by Prof. Matt Miller, Department of Mathematics, University of South Carolina
email: miller@math.sc.edu

Maple worksheet to accompany Hassell, 1978, Monographs in Population Biology 13, and the course text Edelstein-Keshet, Mathematical Models in Biology, Birkhauser Mathematics Series, McGraw Hill, New York, 1988.

The first part explores the standard density independent and density dependent models. The students are asked to investigate the qualitative effects on the behavior of the populations under modification of relevant parameters. They are also asked to examine the eigenvalues of the linearized system at the equilibrium point, at this stage purely numerically. The second part is concerned with a more general (algebraic) eigenvalue analysis of both the original and the density dependent models. After linearization, eigenvalues are computed and interpreted, with the goal of understanding how the behavior of the model is affected by parameter values. There is a (we think rather nice!) demonstration that Maple can handle parameter space plots. The last part investigates the stability analysis from a graphical point of view, giving an indication of why the jacobian matrix is used.

Section 1a. Density independent model

Section 1b. Density dependent model

Enter values of the growth rate r and the density dependence parameter q. The program will generate a plot of the populations vs. time and a phase plot. It will also calculate the eigenvalues of the linearized equations around the equilibrium values of host and parasitoid for the particular values of r and q chosen.

> restart; with(plots): with(linalg):

> r:=1.5; q:=0.6; c:=1.0; a:=0.2; # parameter values

Warning, the name changecoords has been redefined

Warning, the protected names norm and trace have been redefined and unprotected

r := 1.5

q := .6

c := 1.0

a := .2

> txtr:=convert(evalf(r),string): txtq:=convert(evalf(q),string):
labelr:=`r = `: labelq:=`, q = `: fulltitle:= cat(labelr,txtr,labelq,txtq):

> P_hat:= r/a*(1-q); N_hat:= P_hat/(1-exp(-a*P_hat)); K:= N_hat/q; # equilibria

P_hat := 3.000000000

N_hat := 6.649107646

K := 11.08184608

> N[0]:= 11; P[0]:= 4; # initial conditions

N[0] := 11

P[0] := 4

> for i from 0 to 200 do
N[i+1]:=N[i]* exp(r * (1 - N[i]/K) - a*P[i]);
P[i+1]:=c*N[i]*(1 - exp(-a*P[i]));
od:

> plot({[[t,P[t]] $t = 0.. 200], [[t,N[t]] $t = 0..200] }, style=line, title=`fulltitle`);

[Maple Plot]

> plot([[N[t],P[t]] $ t=0..200], style=line, title=cat(`Phase portrait for `,`fulltitle`));

[Maple Plot]

>


Compare the results of these simulations to Figure 2.7 in Hassell. Now go back and try this again with new values for r and q. You should use the Hassell reprint as a guide to pick values that yield qualitatively different behaviors.
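If you plan to try many (r, q) combinations, it can help to wrap the iteration above in a small procedure. The sketch below is not part of the original worksheet: the names simulate, rr, and qq are chosen here for illustration, and the body simply repackages the loop and plot commands already shown, reusing the global a and c set above.

> simulate:= proc(rr, qq)
  local i, t, Ph, Nh, KK, N, P;
  Ph:= rr/a*(1-qq); Nh:= Ph/(1-exp(-a*Ph)); KK:= Nh/qq;  # equilibria, as computed above
  N[0]:= 11; P[0]:= 4;                                   # same initial conditions as above
  for i from 0 to 200 do
    N[i+1]:= N[i]*exp(rr*(1 - N[i]/KK) - a*P[i]);
    P[i+1]:= c*N[i]*(1 - exp(-a*P[i]));
  od;
  plot({[seq([t, N[t]], t=0..200)], [seq([t, P[t]], t=0..200)]}, style=line,
    title=cat(`r = `, convert(evalf(rr), string), `, q = `, convert(evalf(qq), string)));
end:

> simulate(2.0, 0.4); # for example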

Section 1c. Equilibrium analysis


To calculate the eigenvalues of the linearized system, we recast the equations in more symbolic terms, converting N(t) to N and P(t) to P. We then calculate the partial derivatives of each equation with respect to both N and P. This can be done by hand, as is done below by calculating a11, a12, a21, and a22. Alternatively, the jacobian command will do the calculation directly. Once the matrix is assembled, the eigenvals command is used to calculate the eigenvalues.

Remember that since this is a difference equation model, stability occurs (possibly after damped oscillation) when the eigenvalues are less than 1 in absolute value. Oscillations occur when there are imaginary components to the eigenvalues.

> r:=1.5; q:=0.6;

r := 1.5

q := .6

> Host_rate:= N * exp(r * (1 - N/K) - a*P);

> a11:= diff(Host_rate, N); a12:= diff(Host_rate, P);

> A11:= evalf(subs({N=N_hat, P=P_hat}, a11));

> A12:= evalf(subs({N=N_hat, P=P_hat}, a12));

Host_rate := N*exp(1.5-.1353565091*N-.2*P)

a11 := exp(1.5-.1353565091*N-.2*P)-.1353565091*N*ex...

a12 := -.2*N*exp(1.5-.1353565091*N-.2*P)

A11 := .1000000004

A12 := -1.329821529

> Parasitoid_rate:= c * N* (1 - exp(-a*P));

> a21:=diff(Parasitoid_rate, N); a22:=diff(Parasitoid_rate, P);

> A21:= evalf(subs({N=N_hat, P=P_hat}, a21));

> A22:= evalf(subs({N=N_hat, P=P_hat}, a22));

> A:= matrix( 2, 2, [ [A11, A12], [A21, A22] ] );

Parasitoid_rate := 1.0*N*(1-exp(-.2*P))

a21 := 1.0-1.0*exp(-.2*P)

a22 := .20*N*exp(-.2*P)

A21 := .4511883639

A22 := .7298215291

A := matrix([[.1000000004, -1.329821529], [.4511883...


Here is the alternate method for computing the coefficient matrix using the jacobian command.

> J:= jacobian([Host_rate, Parasitoid_rate], [N, P]); # computes the partials and forms the matrix

J := matrix([[exp(1.5-.1353565091*N-.2*P)-.13535650...

> A:= evalf(subs({N=N_hat, P=P_hat}, evalm(J))); # same as before

A := matrix([[.1000000004, -1.329821529], [.4511883...

> eigens:= eigenvals(A);

eigens := .4149107647+.7076942917*I, .4149107647-.7...

> abs(eigens[1]); # this gives the magnitude ("absolute value") of the first of the two eigenvalues

.8203548946

>

How do you interpret the behavior of the system if these are the eigenvalues? Go back to the beginning of this part of the worksheet, reset r and q, and see if the eigenvalue calculations lead to a different conclusion.

Section 2. Eigenvalue analysis -- dependence on parameters

We exhibit how the behaviour of the system near the equilibrium point depends on the parameters, in both the original and the density dependent Nicholson-Bailey models. In the original model, host growth is unconstrained in the absence of parasitoids; the growth rate is lambda. We use the coefficient matrix A obtained by linearization at the equilibrium (see the text, E-K, page 82).

> restart: with(plots): with(linalg):

Warning, the name changecoords has been redefined

Warning, the protected names norm and trace have been redefined and unprotected
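For reference (this block is not part of the original worksheet), here are the original Nicholson-Bailey equations in the notation of Section 1; Section 3 enters the same equations in fixed-point form as eq1 and eq2. The names host_eq and para_eq are used only for display.

> host_eq:= N[t+1] = lambda*N[t]*exp(-a*P[t]); # unconstrained host growth at rate lambda
> para_eq:= P[t+1] = c*N[t]*(1 - exp(-a*P[t])); # parasitoid recruitment from attacked hosts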

> c:= 1; # to get formal rather than floating point calculation of the eigenvalues

> mu:= 1 - 1 / lambda; b:= c * mu;

> A:= matrix( 2 , 2, [ [1, -ln(lambda)/b], [b, ln(lambda)/mu] ]);

c := 1

mu := 1-1/lambda

b := 1-1/lambda

A := matrix([[1, -ln(lambda)/(1-1/lambda)], [1-1/la...

It is easy to get the eigenvalues, but watch out, they could be complex numbers.

> evalues:= eigenvals(A);

evalues := 1/2*(lambda+ln(lambda)*lambda-1+sqrt(lam...

These are a big ugly mess, at least inside the square root. You can actually do a good bit of housecleaning (by hand) to see that the first one, at least, has to have a real part that is bigger than 1, plus an imaginary part. This is already enough to demonstrate instability of the equilibrium point. However, to get an idea of what is going on, I decided to substitute some values for lambda. Remember that evalf converts to decimal form. We can also compute the magnitudes of these complex numbers, but it is clear that they are bigger than 1. We begin by computing the eigenvalue magnitude that corresponds to a small reproductive rate lambda; then we try larger and larger reproductive rates.

> evalf(subs( lambda=1.001, [evalues] ));

[1.000249917+.3161388540e-1*I, 1.000249917-.3161388...

> abs( % [1] ); # take the first eigenvalue and compute its magnitude (both have the same magnitude)

1.000749386

> evalf( subs( lambda=1.1, [evalues] ) ); abs( % [1] );

[1.024205989+.3077730495*I, 1.024205989-.3077730495...

1.069449465

> evalf( subs( lambda=1.5, [evalues] ) ); abs( % [1] );

[1.108197662+.6275016922*I, 1.108197662-.6275016922...

1.273522843

> evalf( subs( lambda=2.0, [evalues] ) ); abs( % [1] );

[1.193147181+.8098403220*I, 1.193147181-.8098403220...

1.442026887

> evalf( subs( lambda=10, [evalues] ) ); abs( % [1] );

[1.779213941+1.302079386*I, 1.779213941-1.302079386...

2.204770504

> evalf( subs( lambda=1000, [evalues] ) ); abs( % [1] );

[5.313091173, 2.601578775]

5.313091173


Apparently in every case (except for the last one) we get trajectories in the phase plane that spiral away from the equilibrium. Since spiralling in the phase plane (parasite population against host population) represents oscillation of both populations as functions of time, what this amounts to is bigger and bigger oscillations in the populations--just what you observed in Figure 3.3!

Let's try to analyze the eigenvalues abstractly. A little algebraic hacking around shows that if you bring the denominator inside the square root, and do some rearranging and factoring, the result is:

> mess:=1-6*ln(lambda)*(lambda-2/3)/(lambda-1) + (ln(lambda))^2 *lambda^2 /(lambda-1)^2;

mess := 1-6*ln(lambda)*(lambda-2/3)/(lambda-1)+ln(l...

Let's plot this as a function of lambda.

> plot( mess, lambda = 1 .. 400 );

[Maple Plot]

As you can see, for large enough lambda the eigenvalues will be real (since mess is positive and has a real square root), but for lambda in any sensible range, the situation that we observed in Figure 3.3 is typical.
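If you would like a numerical estimate of where the switch from complex to real eigenvalues occurs, one option (not in the original worksheet) is to hand mess to fsolve. Assuming, as claimed above, that mess is the quantity under the square root, its sign change marks the transition; the experiments above suggest searching between lambda = 10 (complex eigenvalues) and lambda = 1000 (real eigenvalues).

> lambda_crit:= fsolve(mess = 0, lambda = 10 .. 1000); # eigenvalues are real for lambda beyond this root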

Now let's suppose the host population is subject to density dependent growth. To linearize the system we need to compute partial derivatives and evaluate them at the equilibrium values of N and P. We use the right hand side of equations (28) on page 84 of E-K. Recall that ultimately we want to look at the dependence of the eigenvalues on r and q only, so let us substitute the values used in the text: c = 1, K = Nequil / q .

> a_11:= diff( N * exp( r * (1 - N / K ) - a * P) , N) ;

> a_12:= diff( N * exp( r * (1 - N / K ) - a * P) , P );

> a_21:= diff(c * N * ( 1 - exp( -a * P ) ) , N ) ;

> a_22:= diff( c * N * ( 1 - exp( -a * P ) ) , P ) ;

a_11 := exp(r*(1-N/K)-a*P)-N*r*exp(r*(1-N/K)-a*P)/K...

a_12 := -N*a*exp(r*(1-N/K)-a*P)

a_21 := 1-exp(-a*P)

a_22 := N*a*exp(-a*P)

> Pequil:= r / a * ( 1 - q ); Nequil:= Pequil / (1 - exp(- a * Pequil));

> K:= Nequil / q; c:= 1;

Pequil := r*(1-q)/a

Nequil := r*(1-q)/(a*(1-exp(-r*(1-q))))

K := r*(1-q)/(a*(1-exp(-r*(1-q)))*q)

c := 1

> A_11:= subs({N = Nequil, P = Pequil}, a_11);

> A_12:= subs({N = Nequil, P = Pequil}, a_12);
A_21:= subs({N = Nequil, P = Pequil}, a_21);

> A_22:= subs({N = Nequil, P = Pequil}, a_22);

A_11 := exp(0)-r*q*exp(0)

A_12 := -r*(1-q)*exp(0)/(1-exp(-r*(1-q)))

A_21 := 1-exp(-r*(1-q))

A_22 := r*(1-q)*exp(-r*(1-q))/(1-exp(-r*(1-q)))

> A:= matrix( 2, 2, [ [ A_11, A_12] , [A_21, A_22] ]);

A := matrix([[1-r*q, -r*(1-q)/(1-exp(-r*(1-q)))], [...

This gives us the linearization coefficient matrix. Note that the parameter a has disappeared. Next we give an alternative approach using more of Maple's built-in capabilities.

> host_rhs:= N * exp( r * (1 - N / K ) - a * P);

> parasitoid_rhs:= c * N * ( 1 - exp( -a * P ) ) ;

host_rhs := N*exp(r*(1-N*a*(1-exp(-r*(1-q)))*q/(r*(...

parasitoid_rhs := N*(1-exp(-a*P))

> J:= jacobian( [host_rhs, parasitoid_rhs], [N, P] );

J := matrix([[exp(r*(1-N*a*(1-exp(-r*(1-q)))*q/(r*(...

> B:= subs({N=Nequil, P=Pequil}, evalm(J)); # evaluate the Jacobian at the equilibrium

B := matrix([[exp(0)-r*q*exp(0), -r*(1-q)*exp(0)/(1...

>

The same as A? Use whichever syntax you prefer from now on!
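If you would rather have Maple do the comparison, one quick check (not in the original worksheet) is to subtract the two matrices and simplify; every entry should reduce to zero.

> map(simplify, evalm(A - B)); # expect the 2 x 2 zero matrix if A and B agree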

>


Once again, we perform an eigenvalue analysis. Substitution of various values of r and q should yield results consistent with Figure 2.7 of the Hassell handout. If we look in (r, q) space, the boundary curves of the regions should be determined by seeing where these eigenvalues have imaginary part equal to zero (the internal curve) or magnitude equal to one (the outer boundary).

> evalues:= [eigenvals(A)]; # Note there are two of them, separated by a comma

evalues := [1/2*(-1+exp(r*(-1+q))+r*q-r*exp(r*(-1+q...

The eigenvalues involve only the parameters r and q (the parameter a has dropped out); we name them mu and eta for further analysis.

> mu:= evalues[1]; eta:= evalues[2] ;

mu := 1/2*(-1+exp(r*(-1+q))+r*q-r*exp(r*(-1+q))+sqr...

eta := 1/2*(-1+exp(r*(-1+q))+r*q-r*exp(r*(-1+q))-sq...


The following procedure examines the eigenvalues and tests to see if the magnitude is greater than or less than one, and whether there is an imaginary component or not. Depending upon which of these conditions is met, the procedure returns a number between 0 and 4. Note that it is essentially impossible to test equality of floating point (decimal) numbers on a computer (you can test if an integer is zero or not, but whether 10E-8 or 10E-10 or 10E-12 is zero will depend on the computer hardware and software). For this reason, we test instead for "close enough" to 0 or 1. The input arguments rr and qq are used to compute the value of the eigenvalue mu.

> behav:= proc(rr, qq)

> ev:=evalf( subs( {r=rr, q=qq}, mu) );
evi:=Im(ev); evmag:=abs(ev);
if evmag > 1 and abs(evi) <10^(-8) then zone:=4; fi; # explosion (red)
if evmag > 1 and evi <>0 then zone:=3; fi; # unstable oscillation (green)
if abs(evmag - 1)<10^(-8) then zone:=2; fi; # stable oscillation (yellow)
if evmag < 1 and evi <> 0 then zone:=1; fi; # damped oscillation (blue)
if evmag < 1 and abs(evi) <10^(-8) then zone:=0; fi;#over-damping (purple)

> result:= zone;

> end:

Warning, `ev` is implicitly declared local to procedure `behav`

Warning, `evi` is implicitly declared local to procedure `behav`

Warning, `evmag` is implicitly declared local to procedure `behav`

Warning, `zone` is implicitly declared local to procedure `behav`

Warning, `result` is implicitly declared local to procedure `behav`

First let's test out the procedure on some sample values.

> behav( 6, 0.65);

4

> behav( 1, 0.3 );

3

> behav( 2, 0.5);

2

> behav( 3 , 0.6 );

1

> behav( 1, 0.9 );

0


Now use the procedure to reproduce Hassell Fig 2.7. The orientation is set so that the view of the 3D plot is from above.

> contourplot3d(behav, 0.1.. 6.0, 0.01.. 0.9, grid=[45,45], orientation = [-90, 0], shading=ZHUE, axes=BOXED, style = PATCHCONTOUR, title=`Stability Regions of Density Dependent N-B: Hassell 2.7`);

[Maple Plot]

Show the imaginary part of mu as a function of r and q.

> Implot:= plot3d(Im(mu), q=0.1.. 0.9, r=0.1.. 6.0, axes=BOXED, shading=ZHUE, style=PATCHCONTOUR, title=`Imaginary Parts of Eigenvalues`):

> display( Implot );

[Maple Plot]

Show the magnitude of mu as a function of r and q.

> Magplot:=plot3d(abs(mu), q=0.1..0.9, r=0.1..6.0, shading=ZHUE, axes=BOXED, style=PATCHCONTOUR, title=`Magnitudes of Eigenvalues`):

> display( Magplot );

[Maple Plot]

Show the real part of mu as a function of r and q.

> Replot:= plot3d(Re(mu), q=0.1..0.9, r=0.1..6.0, shading=ZHUE, axes=BOXED, style=PATCHCONTOUR, title=`Real Parts of Eigenvalues`):

> display( Replot );

[Maple Plot]

Display real and imaginary components together.

> display( { Replot, Implot } , title = `Real and imaginary parts `);

[Maple Plot]

Here is an alternate way to get the boundary curves. Caution: Maple's implicit plotting is impressionistic at best. It is very sensitive to changes in mesh fineness, and it can be very slow.

> magnitude:= abs(mu);

> Outer:= implicitplot( magnitude = 1, r=0.1..6.0, q=0.01..0.9, color = plum):

magnitude := 1/2*abs((-1+exp(r*(-1+q))+r*q-r*exp(r*...

> Inner:= implicitplot( magnitude = Re(mu), r=0.1..6.0, q=0.01..0.9, grid=[50,50], color=green): # Patience!!

> display( {Outer, Inner} );

[Maple Plot]

>

Section 3. Graphical illustration of stability analysis

> restart:gc():

> restart:

>

> eq1:=n=n*lambda*exp(-a*p);

eq1 := n = n*lambda*exp(-a*p)

> eq2:=p=c*n*(1-exp(-a*p));

eq2 := p = c*n*(1-exp(-a*p))

> parms:=a=.068,c=1,lambda=2;

parms := a = .68e-1, c = 1, lambda = 2

> Eq1:=subs(parms,eq1);

Eq1 := n = 2*n*exp(-.68e-1*p)

> Eq2:=subs(parms,eq2);

Eq2 := p = n*(1-exp(-.68e-1*p))

> equil:=solve({Eq1,Eq2},{n,p});

equil := {n = 0., p = 0.}, {p = 10.19334089, n = 20...

>

> with(plots):

Warning, the name changecoords has been redefined

> plot3d(rhs(Eq1),p=0..30,n=0..30,axes=boxed,title=`Host Growth Function`);

[Maple Plot]

The above graph shows the host population at time t+1 as a function of the host and parasitoid populations at time t. This surface is analogous to the growth function lines used in the cobweb analysis of the logistic equation by May and Oster. It is a surface rather than a line, because both hosts and parasitoids influence the growth of the host species.

> plot3d(n,p=0..30,n=0..30,axes=boxed,title=`Host Zero Growth`);

[Maple Plot]

The above plane represents the set of points where the host population at time t+1 equals the host population at time t (equivalent to the 45 degree line plotted by May and Oster in their cobweb analysis).

> plot3d(rhs(Eq2),p=0..30,n=0..30,axes=boxed,title=`Parasitoid Growth Function`);

[Maple Plot]

The above graph represents the parasitoid population at time t+1 as a function of host and parasitoid populations at time t. Note that this growth function is very non-linear.

> plot3d(p,p=0..30,n=0..30,axes=boxed,title=`Parasitoid Zero Growth`);

[Maple Plot]

The above plane represents the set of points where the parasitoid population at time t+1 equals the parasitoid population at time t (equivalent to May and Oster's 45 degree line).

>

The entries in the jacobian matrix j below are the partial derivatives of the host and parasitoid functions with respect to host and parasitoid population density. They represent the slopes of the host and parasitoid growth functions along the host and parasitoid axes.

> with (linalg):

Warning, the protected names norm and trace have been redefined and unprotected

> j:=jacobian([rhs(Eq1),rhs(Eq2)],[n,p]);

j := matrix([[2*exp(-.68e-1*p), -.136*n*exp(-.68e-1...

The jacobian is evaluated at the equilibrium point, and its eigenvalues describe the behavior of the system.

> jj:=subs(op(2,[equil]),evalm(j));

jj := matrix([[2*exp(-.6931471805), -2.772588722*ex...

> ev:=eigenvals(jj);

ev := .8465735903+.8182954951*I, .8465735903-.81829...

The abs function is used to determine the magnitude of the eigenvalues.

> abs(op(1,[ev])),abs(op(2,[ev]));

1.177410022, 1.177410022

In this case, the eigenvalues have magnitudes greater than 1, indicating growth away from the equilibrium. The eigenvalues also have imaginary components indicating oscillatory behavior. This combination indicates expanding oscillations. The parameters used are from Fig 3.3 in Edelstein-Keshet and Fig 2.3 in Hassell 1978.
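As an informal check (not part of the original worksheet), we can iterate the map itself starting near the equilibrium; under the interpretation above, the orbit should spiral outward, as in Figure 3.3. The names nn, pp, a0, c0, and lam are chosen here so as not to disturb the symbols n, p, a, c, and lambda used elsewhere in this section.

> a0:= 0.068: c0:= 1: lam:= 2: # same parameter values as parms above
> nn[0]:= 20.5: pp[0]:= 10.19: # a point close to the equilibrium (n, p) = (20.39, 10.19)
> for i from 0 to 30 do
  nn[i+1]:= lam*nn[i]*exp(-a0*pp[i]);
  pp[i+1]:= c0*nn[i]*(1 - exp(-a0*pp[i]));
od:

> plot([seq([nn[k], pp[k]], k=0..30)], style=line, title=`Expanding oscillations near the equilibrium`);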

> op(1,op(2,[equil]));op(2,op(2,[equil]));

p = 10.19334089

n = 20.38668178

We will plot the functions close to the equilibrium listed above:

> h_n:=plot3d(rhs(Eq1),p=rhs(op(1,op(2,[equil])))..rhs(op(1,op(2,[equil])))+.1,n=0..30,axes=boxed):

> h_p:=plot3d(rhs(Eq1),p=0..30,n=rhs(op(2,op(2,[equil])))..rhs(op(2,op(2,[equil])))+.1,axes=boxed):

> display({h_n,h_p},title=`Host Function Near Equilibrium`);

[Maple Plot]

The above figure plots the host growth function along lines parallel to the host and parasitoid axes and passing through the equilibrium; the slopes of these curves are the partial derivatives of the growth function with respect to n and p.

> p_n:=plot3d(rhs(Eq2),p=rhs(op(1,op(2,[equil])))..rhs(op(1,op(2,[equil])))+.1,n=0..30,axes=boxed):

> p_p:=plot3d(rhs(Eq2),p=0..30,n=rhs(op(2,op(2,[equil])))..rhs(op(2,op(2,[equil])))+.1,axes=boxed):

> display({p_n,p_p},title=`Parasitoid Function Near Equilibrium`);

[Maple Plot]

The above figure plots the parasitoid growth function along lines parallel to the host and parasitoid axes and passing through the equilibrium; the slopes of these curves are the partial derivatives of the growth function with respect to n and p.

>