Application Center - Maplesoft

# Periodic solutions for a first order PDE with periodic forcing function


PeriodSum.mws


Jim Herod

Georgia Tech, Retired

jherod@tds.net

Summary: This Maple worksheet provides a method for finding solutions for a non-homogeneous partial differential equation of the form

∂u/∂t = ∂²u/∂x² + F(t, x),   u(t, 0) = 0 = u(t, 1).

More important, if the function F(t,x) is periodic as a function of t, it provides a method for finding a periodic solution for the partial differential equation. More than that, the techniques are chosen to emphasize a general structure equally suitable for finding a periodic solution for ordinary differential equations having the same general form

Y' = AY + F

where F is periodic.

Section 1: Introduction

Statement of the problem

In this discussion, we consider solutions Y for the differential equation

(1.1) Y' = AY + F.

In case A is a number and F is a real valued function, students in an introductory calculus course often can make a solution for this equation. If A is a matrix, students in an introductory ordinary differential equations course likely can formulate a solution. The principal interest here is the beginning of an understanding for how to handle this problem in case A is a differential operator.

We want more than to compute solutions. Our interest is in periodic solutions. We choose an A in each of the above contexts and suppose that F is a periodic function from R into the space on which A is defined.

We seek periodic solutions for the equation Y' = AY + F.

Different Contexts for the Problem

We illustrate the ideas of the problem we have presented with three examples.

Example 1 : Find a periodic solution for the equation y'(t) = -2 y(t) + sin(t).
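Although the worksheet carries out its computations in Maple, the claim in Example 1 is easy to check numerically. The sketch below (plain Python, not part of the worksheet) uses the standard undetermined-coefficients answer y(t) = (2 sin t - cos t)/5 and a hand-rolled RK4 integrator to confirm that the solution starting at y(0) = -1/5 returns to that value after one period 2π.

```python
import math

def rk4(f, y0, t0, t1, steps=2000):
    """Integrate y' = f(t, y) from t0 to t1 with classical Runge-Kutta."""
    h = (t1 - t0) / steps
    t, y = t0, y0
    for _ in range(steps):
        k1 = f(t, y)
        k2 = f(t + h/2, y + h/2*k1)
        k3 = f(t + h/2, y + h/2*k2)
        k4 = f(t + h, y + h*k3)
        y += h/6*(k1 + 2*k2 + 2*k3 + k4)
        t += h
    return y

rhs = lambda t, y: -2*y + math.sin(t)

# Candidate periodic solution from undetermined coefficients:
# y(t) = (2 sin t - cos t)/5, so y(0) = -1/5.
P = -1/5
yL = rk4(rhs, P, 0.0, 2*math.pi)
print(abs(yL - P))  # essentially zero: the solution is 2π-periodic
```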

Example 2 : Find a periodic solution for the system of equations

x' = -2 x + y + cos(t),

y' = x - 2 y + sin(t).

We rewrite this Example 2 into the context of our discussion by identifying the matrix

A = [[-2, 1], [1, -2]]

and the function

F(t) = [cos(t), sin(t)].

With this identification, the system of equations with a forcing function has the form of our general problem:

Y' = AY + F.

Example 3 : Find a periodic solution for the partial differential equation

∂u/∂t = ∂²u/∂x² + 10 sin(t) x^2 (1 - x)^2,   u(t, 0) = 0 = u(t, 1).

To realize this problem in the context of our equation (1.1) and to follow the paradigm we will use to make solutions, we need to be a little careful about identifying A. Note the character of the A in the second example: A was a mapping of R^2 into R^2 and, if t is a number, then F(t) is in the domain of A, namely R^2. For the third example we take the space to be the twice differentiable functions on the interval [0, 1] with value zero at both ends. Note that if t is a fixed number, then

F(t, x) = 10 sin(t) x^2 (1 - x)^2

as a function of x is in this space. Agreeing that

A(f) = f '', with f(0) = 0 = f(1),

we have that, for fixed t, F(t, x) is in the domain of A.

What Follows

In the remainder of this discussion, we will recall the derivation of the solution for the problem

Y' = A Y + F, with Y(0) = c.

This solution is often called the variation of parameters solution:

Y(t) = exp(tA) c + exp(tA) ∫_0^t exp(-sA) F(s) ds.

The use of this formula for calculation of solutions will require an understanding of the exponential function. The exponential of a number can be calculated with a hand held calculator. Thus, solutions for the first example are easy to compute. The exponential of a matrix A such as in the second example is not hard to compute in the Maple environment. We illustrate the details for both these examples.

The exponential of the differential operator is more interesting. We obtain this exponential, develop the variation of parameters solution for the partial differential equation with a forcing function, and find the periodic solution for Example 3.

Section 2: A Structure for Periodic Solutions

Solutions for Y'=AY + F

Suppose for this section that exp(tA) can be defined for each of the examples presented in Section 1: Introduction. We have stated in the introduction that this is not a severe supposition for the reader in the first two examples and in this Maple environment. The promise is that the exponential of the third A will be provided also. We provide a recipe for getting a solution for differential equations which have the general form

Y' = AY + F.

Our development of the variation of parameters formula is formal in the sense that we assume the calculus for the exponential function in order to give a plausible structure for a solution. That the formula does provide a solution will be verified for each example.

Suppose that Y(t) satisfies the differential equation. Consider the function exp(-tA) Y(t). We take the derivative formally:

( exp(-tA) Y(t) )' = exp(-tA)Y'(t) - exp(-tA)AY(t)

= exp(-tA) (Y'(t) - AY(t))

= exp(-tA) F(t).

Integrate both sides from 0 to t:

exp(-tA) Y(t) - Y(0) = ∫_0^t exp(-sA) F(s) ds,

or

(2.1) Y(t) = exp(tA) Y(0) + exp(tA) ∫_0^t exp(-sA) F(s) ds.

We will use this formula to compute and check solutions for each of the three examples listed in the introduction.
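Formula (2.1) can be spot-checked numerically in the scalar case before trusting it (a Python sketch under the assumption A = -2, F(t) = sin t, Y(0) = 0.7; the constants are illustrative only): evaluate the right-hand side of (2.1) with quadrature and compare it against a direct numerical integration of Y' = AY + F.

```python
import math

def simpson(g, a, b, m=1000):
    """Composite Simpson rule for the integral of g over [a, b] (m even)."""
    h = (b - a) / m
    s = g(a) + g(b)
    for i in range(1, m):
        s += (4 if i % 2 else 2) * g(a + i*h)
    return s * h / 3

def rk4(f, y0, t0, t1, steps=4000):
    """Classical Runge-Kutta for y' = f(t, y)."""
    h = (t1 - t0) / steps
    t, y = t0, y0
    for _ in range(steps):
        k1 = f(t, y); k2 = f(t + h/2, y + h/2*k1)
        k3 = f(t + h/2, y + h/2*k2); k4 = f(t + h, y + h*k3)
        y += h/6*(k1 + 2*k2 + 2*k3 + k4); t += h
    return y

A, c, t1 = -2.0, 0.7, 1.0

# Right side of (2.1): exp(tA) Y(0) + exp(tA) * integral of exp(-sA) F(s) ds
Y_formula = math.exp(t1*A)*c + \
    math.exp(t1*A)*simpson(lambda s: math.exp(-s*A)*math.sin(s), 0.0, t1)

# Direct numerical solution of Y' = AY + F, Y(0) = c
Y_direct = rk4(lambda t, y: A*y + math.sin(t), c, 0.0, t1)
print(abs(Y_formula - Y_direct))  # agreement to quadrature accuracy
```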

Periodic Solutions.

We are not satisfied to find solutions for the equation Y' = AY + F. We want more. We suppose that F is periodic and ask that we should be able to compute periodic solutions for the examples.

To this end, take L to be a number such that F(L) = F(0). We ask: what should Y(0) be in order that Y(L) = Y(0)? Using the form from equation (2.1) which we anticipate will provide solutions, we have

Y(L) = exp(LA) Y(0) + exp(LA) ∫_0^L exp(-sA) F(s) ds.

If Y(L) = P = Y(0), then

P = exp(LA) P + exp(LA) ∫_0^L exp(-sA) F(s) ds,

or

(2.2) P = (I - exp(LA))^(-1) exp(LA) ∫_0^L exp(-sA) F(s) ds.

This formula provides the starting point for the computation of a periodic solution. There are some points for concern. We need an inverse; does this inverse exist for each of the problems?
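For the scalar Example 1 the inverse in (2.2) is just division by 1 - exp(LA), and the whole recipe can be tried numerically (a Python sketch; Section 4 below does the same computation symbolically in Maple):

```python
import math

def simpson(g, a, b, m=4000):
    """Composite Simpson rule for the integral of g over [a, b] (m even)."""
    h = (b - a) / m
    s = g(a) + g(b)
    for i in range(1, m):
        s += (4 if i % 2 else 2) * g(a + i*h)
    return s * h / 3

A = -2.0
L = 2*math.pi   # period of F(t) = sin t

# (2.2) in the scalar case: P = exp(LA) * ∫_0^L exp(-sA) F(s) ds / (1 - exp(LA))
integral = simpson(lambda s: math.exp(-s*A)*math.sin(s), 0.0, L)
P = math.exp(L*A)*integral / (1 - math.exp(L*A))
print(P)  # ≈ -1/5, the initial value of the periodic solution of y' = -2y + sin t

# And (2.1) with Y(0) = P indeed returns to P after one period:
YL = math.exp(L*A)*P + math.exp(L*A)*integral
print(abs(YL - P))
```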

This formula purports to provide an initial value for a periodic solution. Starting with this initial value, are the computed solutions periodic? We show that the answer is yes by an analysis similar to the one above. With the supposition that F is periodic with period L, let Z(t) be defined by equation (2.1):

Z(t) = exp(tA) P + exp(tA) ∫_0^t exp(-sA) F(s) ds.

Let X(t) be defined as Z(t+L).

X'(t) = Z'(t+L) = A Z(t+L) + F(t+L)

= A X(t) + F(t).

Also, X(0) = Z(L) = exp(LA) P + exp(LA) ∫_0^L exp(-sA) F(s) ds.

Substitute the definition for P from above, factor, and this X(0) reduces to P. Thus, the function X(t) satisfies the same differential equation and the same initial value as Z(t). Thus,

Z(t) = Z(t+L),

and the function Z(t) is periodic.

The examination of the examples with these formulas is our intent.

Asymptotic Properties

Often in physical systems which have a periodic forcing function, if a solution begins at some initial position different from the one which produces the periodic solution, the values of this solution will converge as t increases to the values of the periodic solution. This is true for each of the examples we examine. This observation can be made precise.

We suppose that P and Q are in the domain of A and that u and v satisfy the equation

Y' = AY + F

with u(0) = P and v(0) = Q. There is a number c such that

| u(t) - v(t) | ≤ exp(c t) | P - Q |.

In fact, the largest eigenvalue of A can be chosen as c.

To see that the magnitude for the difference in values of u and v is as suggested, one has only to use equation (2.1). There, we see that

| u(t) - v(t) | = | exp(tA) (P - Q) |.

The operator norm for the linear operator exp(tA) will finish the explanation for the inequality. We resist that discussion, but try to make clear what exp(tA) is.

The importance of this inequality, however, is recognized as follows. For the three examples stated in the introduction, the number c is negative. The result is that if a solution for any of the three equations begins at a point which does not produce the periodic solution, the values of the solution quickly move toward the periodic solution. That periodic solution is an attracting stable solution.
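In the scalar Example 1 the inequality is in fact an identity: u - v solves w' = -2w, so |u(t) - v(t)| = e^{-2t} |P - Q| exactly. A small Python check (the initial values 1 and -0.5 are arbitrary choices, not from the worksheet):

```python
import math

def rk4(f, y0, t0, t1, steps=4000):
    """Classical Runge-Kutta for y' = f(t, y)."""
    h = (t1 - t0) / steps
    t, y = t0, y0
    for _ in range(steps):
        k1 = f(t, y); k2 = f(t + h/2, y + h/2*k1)
        k3 = f(t + h/2, y + h/2*k2); k4 = f(t + h, y + h*k3)
        y += h/6*(k1 + 2*k2 + 2*k3 + k4); t += h
    return y

c = -2.0                      # the (only) eigenvalue of A in Example 1
P, Q, t1 = 1.0, -0.5, 3.0
rhs = lambda t, y: c*y + math.sin(t)

u = rk4(rhs, P, 0.0, t1)
v = rk4(rhs, Q, 0.0, t1)
gap = abs(u - v)
bound = math.exp(c*t1)*abs(P - Q)
print(gap, bound)  # equal: the two solutions merge at the rate e^{ct}
```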

Section 3: Construction of Exp(tA)

If c and P are numbers, then exp(c t) P is a familiar function known from earliest experiences with mathematics. It provides a solution for Y' = c Y, Y(0) = P.

> y:=t->exp(c*t)*P;
diff(y(t),t)-c*y(t);
y(0);

Maple would have chosen this function if we had asked for a solution for the differential equation.

> restart:
dsolve({diff(y(t),t)=c*y(t),y(0)=P},y(t));

We illustrate the character of solutions for this equation by choosing c = -2 and graphing solutions for several values of P.

> c:=-2;
sol:=dsolve({diff(y(t),t)=c*y(t),y(0)=a},y(t));

> y1:=subs(a=1/3,rhs(sol)):
y2:=subs(a=5/3,rhs(sol)):
y3:=subs(a=-1/2,rhs(sol)):

> plot([y1,y2,y3],t=0..2,y=-1..2,color=[red,blue,black]);

Consider Example 2. If A is an n x n matrix, then the computation of exp(tA) is often a topic covered in a first course in differential equations, especially in the section on systems of differential equations. If P is an n-dimensional vector, then Z(t) = exp(tA)P defines a function whose values are n-dimensional vectors and provides a solution for the system Z'(t) = A Z(t), Z(0) = P.

> restart:
with(linalg):
A:=matrix([[-2,1],[1,-2]]);

```Warning, the protected names norm and trace have been redefined and unprotected
```

> exponential(t*A);

> P:=[a,b];

> evalm(exponential(t*A)&*P);

> E1:=(%)[1]:
E2:=(%%)[2]:
x:=unapply(E1,t);
y:=unapply(E2,t);

We verify that

x' = -2 x + y and y' = x - 2 y

with

[x(0), y(0)] = [a, b].

> simplify(diff(x(t),t)-(-2*x(t)+y(t)));

> simplify(diff(y(t),t)-(x(t)-2*y(t)));

> [x(0),y(0)];
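Readers without Maple can reproduce exp(tA) for this symmetric A by hand: the eigenvalues are -1 and -3 with eigenvectors (1, 1) and (1, -1), so exp(tA) has entries (e^{-t} ± e^{-3t})/2. The Python sketch below cross-checks that closed form against a brute-force Taylor series for the matrix exponential.

```python
import math

A = [[-2.0, 1.0], [1.0, -2.0]]

def expm_eigen(t):
    """exp(tA) from the eigen-decomposition of the symmetric matrix A."""
    p = (math.exp(-t) + math.exp(-3*t)) / 2
    q = (math.exp(-t) - math.exp(-3*t)) / 2
    return [[p, q], [q, p]]

def expm_series(t, terms=40):
    """exp(tA) by summing the Taylor series sum_k (tA)^k / k!."""
    term = [[1.0, 0.0], [0.0, 1.0]]     # current term, starts at the identity
    total = [[1.0, 0.0], [0.0, 1.0]]
    for k in range(1, terms):
        term = [[sum(term[i][m]*A[m][j] for m in range(2))*t/k for j in range(2)]
                for i in range(2)]
        total = [[total[i][j] + term[i][j] for j in range(2)] for i in range(2)]
    return total

E, S = expm_eigen(0.7), expm_series(0.7)
err = max(abs(E[i][j] - S[i][j]) for i in range(2) for j in range(2))
print(err)  # round-off level: the two constructions agree
```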

We illustrate the character of solutions for this equation by drawing four graphs for the solutions of the differential equation starting at different initial values.

> y1:=subs({a=2,b=3},[x(t),y(t),t=0..2]):
y2:=subs({a=-2,b=3},[x(t),y(t),t=0..2]):
y3:=subs({a=-2,b=-3},[x(t),y(t),t=0..2]):
y4:=subs({a=2,b=-3},[x(t),y(t),t=0..2]):

> plot([y1,y2,y3,y4],color=[blue,black,red,green]);

>

We turn now to Example 3. The computation of exp(tA) in case

A(f) = f '', with f(0) = 0 = f(1)

may be less familiar, so we say more than we did with the previous two examples. First, we record the eigenvalues and eigenvectors for this A:

λ_n = -n^2 π^2 and φ_n(x) = sin(n π x), n = 1, 2, 3, ....

This is a complete set of eigenfunctions in the sense that if f is in the space then

f(x) = Σ_{n=1}^∞ f_n sin(n π x)

where f_n is formed by the Fourier coefficient

f_n = ∫_0^1 f(x) sin(n π x) dx / ∫_0^1 sin(n π x)^2 dx.

Convergence of the infinite sum is in norm.

We illustrate with a graph. In this example, the convergence of the infinite sum is uniform on [0, 1]. We will not try to make the infinite sum, but rather use only the first eleven terms.

> restart:
f:=x->10*x^2*(1-x)^2;

> for n from 1 to 11 do
fc[n]:=int(f(x)*sin(n*Pi*x),x=0..1)/
int(sin(n*Pi*x)^2,x=0..1);
od;
n:='n':

> approx:=x->sum(fc[n]*sin(n*Pi*x),n=1..11);

> plot([f(x),approx(x)],x=0..1,color=[red,blue]);
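The same Fourier-coefficient computation can be mirrored outside Maple; the sketch below (Python, with numerical quadrature in place of Maple's exact integrals) rebuilds the eleven coefficients and confirms that the partial sum tracks f closely on [0, 1].

```python
import math

def simpson(g, a, b, m=2000):
    """Composite Simpson rule for the integral of g over [a, b] (m even)."""
    h = (b - a) / m
    s = g(a) + g(b)
    for i in range(1, m):
        s += (4 if i % 2 else 2) * g(a + i*h)
    return s * h / 3

f = lambda x: 10*x**2*(1 - x)**2

# fc_n = ∫ f sin(nπx) dx / ∫ sin²(nπx) dx ; on [0, 1] the denominator is 1/2.
fc = [2*simpson(lambda x, n=n: f(x)*math.sin(n*math.pi*x), 0.0, 1.0)
      for n in range(1, 12)]

def approx(x):
    return sum(c*math.sin(n*math.pi*x) for n, c in zip(range(1, 12), fc))

err = max(abs(f(i/100) - approx(i/100)) for i in range(101))
print(err)  # small: eleven terms already give a close uniform fit
```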

>

Just as we can construct the identity function from the eigenvectors, so we construct A from these eigenvectors and the eigenvalues:

A(f)(x) = Σ_{n=1}^∞ (-n^2 π^2) f_n sin(n π x).

We illustrate with a graph.

> Afprox:=x->sum(-n^2*Pi^2*fc[n]*sin(n*Pi*x),n=1..11);

> plot([Afprox(x),diff(f(x),x,x)],x=0..1);

> diff(f(x),x,x);

>

We are now ready to say what is exp(tA). As before, it should be true that if P is in the domain of A, then Z(t) = exp(tA)P is a function which satisfies the differential equation Z'(t) = AZ. In this example, for a fixed t, Z(t) is itself a function. In fact, Z(t) is a function of x. Instead of using the notation Z(t)(x), we follow convention and write Z(t, x). Thus, what appears to be an ODE in the function space is our partial differential equation

∂Z/∂t = ∂²Z/∂x²,   Z(t, 0) = 0 = Z(t, 1),

Z(0) = f.

With a little knowledge of the techniques for separation of variables, the way to make exp(tA) is clear:

exp(tA) f = Σ_{n=1}^∞ exp(-n^2 π^2 t) f_n sin(n π x).

We verify that such a sum forms a solution.

> Z:=(t,x)->sum(exp(-n^2*Pi^2*t)*fc[n]*sin(n*Pi*x),n=1..11);

> diff(Z(t,x),t)-diff(Z(t,x),x,x);

We check that Z has the right initial value by comparing the graphs of Z(0, x) and f(x). We off-set the graph of f(x) by 0.001 so that it can be distinguished from the graph of Z(0, x).

> plot([Z(0,x),f(x)+0.001],x=0..1);

To follow the pattern of the previous explanations, we should draw a graph for a solution. But we also should expect the resulting graph to follow the pattern for the above graphs. That is, we expect solutions should collapse to zero as t grows.

> plot3d(Z(t,x),x=0..1,t=0..1/5,orientation=[-130,55],
axes=NORMAL);
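As an independent cross-check on this construction of exp(tA), one can compare the eigenfunction series for Z with a plain finite-difference solution of the heat equation (a Python sketch; the grid sizes are illustrative choices):

```python
import math

def simpson(g, a, b, m=2000):
    """Composite Simpson rule for the integral of g over [a, b] (m even)."""
    h = (b - a) / m
    s = g(a) + g(b)
    for i in range(1, m):
        s += (4 if i % 2 else 2) * g(a + i*h)
    return s * h / 3

f = lambda x: 10*x**2*(1 - x)**2
fc = [2*simpson(lambda x, n=n: f(x)*math.sin(n*math.pi*x), 0.0, 1.0)
      for n in range(1, 12)]

def Z(t, x):
    """Eleven-term eigenfunction series for exp(tA) f."""
    return sum(c*math.exp(-(n*math.pi)**2*t)*math.sin(n*math.pi*x)
               for n, c in zip(range(1, 12), fc))

# Explicit finite differences for u_t = u_xx, u(t,0) = u(t,1) = 0, u(0,x) = f(x)
M = 50
dx = 1.0 / M
dt = 0.25*dx*dx            # comfortably inside the stability limit dt <= dx^2/2
u = [f(i*dx) for i in range(M + 1)]
t = 0.0
while t < 0.1 - 1e-12:
    u = [0.0] + [u[i] + dt/dx**2*(u[i+1] - 2*u[i] + u[i-1])
                 for i in range(1, M)] + [0.0]
    t += dt
err = max(abs(u[i] - Z(t, i*dx)) for i in range(M + 1))
print(err)  # the series and the finite-difference solution agree closely
```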

Confident that we can compute the exponential for each of the examples in Section 1, we next use this exponential to compute solutions for the differential equations with a periodic forcing function.

>

Section 4: Computations

In this section, we use the results of Section 2, together with the exponential function as explored in Section 3, to find a periodic solution for the model equations of Section 1.

The first equation was y'(t) = - 2 y(t) + sin(t). By Equation 2.2, the initial value for the periodic solution is

P = (1 - exp(-2L))^(-1) exp(-2L) ∫_0^L exp(2s) sin(s) ds

where L in this example is 2π. We compute each part of this formula.

> L:=2*Pi;
int(exp(2*s)*sin(s),s=0..L);

> expand(exp(-2*L)*%);

> P:=simplify(expand(%/(1-exp(-2*L))));

We now solve the differential equation with this initial value.

> dsolve({diff(y(t),t)=-2*y(t)+sin(t),y(0)=P},y(t));

We can observe that this solution is periodic. We plot this solution and two others to observe the asymptotic property of solutions.

> sol:=dsolve({diff(y(t),t)=-2*y(t)+sin(t),y(0)=a},y(t));

> y1:=subs(a=1,rhs(sol)):
y2:=subs(a=P,rhs(sol)):
y3:=subs(a=-1,rhs(sol)):

> plot([y1,y2,y3],t=0..2*Pi,y=-1..1,color=[blue,red,black]);

>

The second example is a system of equations

x' = -2 x + y + cos(t),   y' = x - 2 y + sin(t).

Equation 2.2 gives the initial value for this system as

P = (I - exp(LA))^(-1) exp(LA) ∫_0^L exp(-sA) F(s) ds

where L in this example is also 2π. We define A and compute the various parts to make the initial value.

> with(linalg):


> L:=2*Pi;
A:=matrix([[-2,1],[1,-2]]);

> expA:=exponential(-s*A);

The first part is the product of exp(-s A) and F(s) = [cos(s), sin(s) ].

> part1:=evalm(expA&*vector([cos(s),sin(s)]));

The second part is the integral of these functions.

> part2:=matrix([[int((part1)[1],s=0..L)],
[int((part1)[2],s=0..L)]]);

The third part is the product of exp(L A) and the integral.

> part3:=evalm(exponential(L*A)&*part2);

Part 4 is a simplification of the above.

> part4:=matrix([[simplify(expand(part3[1,1]))],
[simplify(expand(part3[2,1]))]]);

We make the inverse. First, we define the 2 x 2 identity matrix and let part 5 be the product of the inverse and what has come before. This product is best simplified and separated into the two rows of its vector components.

> id:=matrix([[1,0],[0,1]]);

> part5:=evalm(inverse(id-exponential(2*Pi*A))&*part4):

> r1:=simplify(expand(part5[1,1]));
r2:=simplify(expand(part5[2,1]));
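The same initial value can be cross-checked without Maple's symbolic engine. Undetermined coefficients give the periodic solution x(t) = (cos t + 2 sin t)/5, y(t) = (-cos t + 3 sin t)/5, so r1 and r2 should come out to 1/5 and -1/5. The Python sketch below rebuilds P from formula (2.2) with quadrature, integrating exp((L-s)A) F(s) -- the same quantity as exp(LA) exp(-sA) F(s), but with a bounded integrand:

```python
import math

def expm(t):
    """exp(tA) for A = [[-2, 1], [1, -2]] via its eigen-decomposition."""
    p = (math.exp(-t) + math.exp(-3*t)) / 2
    q = (math.exp(-t) - math.exp(-3*t)) / 2
    return [[p, q], [q, p]]

def mv(M, v):
    """Matrix-vector product for a 2x2 matrix."""
    return [M[0][0]*v[0] + M[0][1]*v[1], M[1][0]*v[0] + M[1][1]*v[1]]

L = 2*math.pi
F = lambda s: [math.cos(s), math.sin(s)]

# w = exp(LA) ∫_0^L exp(-sA) F(s) ds, computed as ∫_0^L exp((L-s)A) F(s) ds
m = 2000
h = L / m
w = [0.0, 0.0]
for i in range(m + 1):
    wt = 1 if i in (0, m) else (4 if i % 2 else 2)   # Simpson weights
    g = mv(expm(L - i*h), F(i*h))
    w[0] += wt*g[0]
    w[1] += wt*g[1]
w = [wi*h/3 for wi in w]

# Solve (I - exp(LA)) P = w by Cramer's rule.
E = expm(L)
M2 = [[1 - E[0][0], -E[0][1]], [-E[1][0], 1 - E[1][1]]]
det = M2[0][0]*M2[1][1] - M2[0][1]*M2[1][0]
P = [(w[0]*M2[1][1] - w[1]*M2[0][1]) / det,
     (M2[0][0]*w[1] - M2[1][0]*w[0]) / det]
print(P)  # ≈ [0.2, -0.2], i.e. r1 = 1/5 and r2 = -1/5
```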

These are the initial values for x and y which define the periodic solution.

> dsolve({diff(x(t),t)=-2*x(t)+y(t)+cos(t),
diff(y(t),t)=x(t)-2*y(t)+sin(t),x(0)=r1,y(0)=r2},
{x(t),y(t)});

We can observe that this solution is periodic. We plot this solution and three others to observe the asymptotic property of solutions.

> sol:=dsolve({diff(x(t),t)=-2*x(t)+y(t)+cos(t),
diff(y(t),t)=x(t)-2*y(t)+sin(t),x(0)=a,y(0)=b},
{x(t),y(t)});

> assign(%);

> x(t);y(t);

> y1:=subs({a=1,b=2},[x(t),y(t),t=0..2*Pi]):
y2:=subs({a=r1,b=r2},[x(t),y(t),t=0..2*Pi]):
y3:=subs({a=-1,b=-2},[x(t),y(t),t=0..2*Pi]):
y4:=subs({a=-1,b=2},[x(t),y(t),t=0..2*Pi]):

> plot([y1,y2,y3,y4],color=[black,red,green,blue]);

>

The third example is the partial differential equation

∂u/∂t = ∂²u/∂x² + 10 sin(2 π t) x^2 (1 - x)^2,   u(t, 0) = 0 = u(t, 1).

Just as for the other examples, Equation 2.2 gives the initial value for the periodic solution for this system as

P = (I - exp(LA))^(-1) exp(LA) ∫_0^L exp(-sA) F(s) ds.

What is different in this example from the others is the technique for computing the parts of this initial value -- of this initial function.

To make the computation of the initial value for the periodic solution for the third example easier to understand, there are two useful ideas that should be recalled.

Useful idea 1. If {φ_n} is an orthogonal sequence and both A and B are linear mappings defined by

A(f) = Σ_n α_n f_n φ_n

and B(g) = Σ_n β_n g_n φ_n,

for all f and g, and with f_n and g_n the Fourier coefficients for f and g with respect to {φ_n}, then

A(B(f)) = Σ_n α_n β_n f_n φ_n.

Useful idea 2. In the setting described above,

exp(tA)(f) = Σ_n exp(α_n t) f_n φ_n.

To keep down the clutter, we write the third model as

Y'(t) = A Y(t) + g(t) f,

where we identify A as the differential operator A(Y) = ∂²Y/∂x² with the boundary conditions, and define

g(t) = sin(2 π t), f(x) = 10 x^2 (1 - x)^2, and F(t, x) = g(t) f(x).

We develop a realization for

P = (I - exp(LA))^(-1) exp(LA) ∫_0^L exp(-sA) F(s) ds.

We recognize

exp(-sA) F(s)

as

Σ_n exp(n^2 π^2 s) g(s) f_n sin(n π x).

We integrate this:

∫_0^L exp(-sA) F(s) ds = Σ_n ( ∫_0^L exp(n^2 π^2 s) g(s) ds ) f_n sin(n π x).

Using the first idea again, we operate on this vector with exp(LA):

exp(LA) ∫_0^L exp(-sA) F(s) ds = Σ_n exp(-n^2 π^2 L) ( ∫_0^L exp(n^2 π^2 s) g(s) ds ) f_n sin(n π x).

Finally, we operate on this last with the inverse indicated in Equation 2.2:

P = Σ_n ( exp(-n^2 π^2 L) / (1 - exp(-n^2 π^2 L)) ) ( ∫_0^L exp(n^2 π^2 s) g(s) ds ) f_n sin(n π x).

In passing, we are reminded that these computations will not work if any eigenvalue is zero, for in that case, the denominator would be zero.

We now perform these calculations for the special case of Example 3. The boundary value problem is over the interval [0, 1], so that L = 1. We decide at the start how many terms we will choose with the variable N.

> restart:L:=1; N:=5;

We perform each of the integrals in the above formula, not yet substituting a value for lambda.

> g:=s->sin(2*Pi*s);

> Part1:=int(exp(-s*lambda)*g(s),s=0..L);

We multiply the integral by the exponential term.

> Part2:=exp(lambda*L)*Part1;

And, we divide this product by the inverse term.

> Part3:=Part2/(1-exp(lambda*L));

> factor(expand(Part3));

These calculations have enabled us to pick out the coefficients of f_n sin(n π x). It is appropriate now to substitute lambda = -n^2 π^2.

> for n from 1 to N do
c[n]:=-2*Pi/(n^4*Pi^4+4*Pi^2);
od;
n:='n';
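The coefficient formula can be double-checked numerically (a Python sketch mirroring Part1-Part3, with the integral rewritten as ∫_0^L exp(lambda (L - s)) g(s) ds so the integrand stays bounded):

```python
import math

def simpson(g, a, b, m=4000):
    """Composite Simpson rule for the integral of g over [a, b] (m even)."""
    h = (b - a) / m
    s = g(a) + g(b)
    for i in range(1, m):
        s += (4 if i % 2 else 2) * g(a + i*h)
    return s * h / 3

L = 1.0
g = lambda s: math.sin(2*math.pi*s)

numeric, closed = [], []
for n in range(1, 6):
    lam = -(n*math.pi)**2
    # Part2 = exp(lam L) * ∫ exp(-s lam) g(s) ds, folded into one integral
    part2 = simpson(lambda s: math.exp(lam*(L - s))*g(s), 0.0, L)
    part3 = part2 / (1 - math.exp(lam*L))
    numeric.append(part3)
    closed.append(-2*math.pi/(n**4*math.pi**4 + 4*math.pi**2))

gap = max(abs(a - b) for a, b in zip(numeric, closed))
print(gap)  # the quadrature values match c[n] = -2π/(n⁴π⁴ + 4π²)
```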

This gives us a formula for the initial value. We record that formula here.

> P:=x->sum(c[n]*f[n]*sin(n*Pi*x),n=1..N);

It should be that P(0) = 0 and P(1) = 0. This can be checked.

> P(0); P(1);

This function P is the initial distribution for the periodic solution of the equation

Y '(t) = AY(t) + g(t) f.

We will draw its graph after we define f and the resulting f_n's.

There are two terms in the solution. One term is exp(t A) P, where P is as we have just determined. Note that this term should solve Y '(t) = AY, Y(0) = P.

We use this fact as a check.

> u1:=(t,x)->sum(exp(-n^2*Pi^2*t)*c[n]*f[n]*sin(n*Pi*x),n=1..N);

> diff(u1(t,x),t)-diff(u1(t,x),x,x);

> u1(t,0); u1(t,1);

> u1(0,x)-P(x);

The second term is the addition of the non-homogeneous part -- the forcing function. This term may be so long as to be cumbersome. We suppress the printing.

> sum(exp(-n^2*Pi^2*t)*int(exp(s*n^2*Pi^2)*g(s),s=0..t)*
f[n]*sin(n*Pi*x),n=1..N):
u2:=unapply(%,(t,x)):

This term should solve the full equation, the boundary conditions, and have initial value 0.

> simplify(diff(u2(t,x),t)-diff(u2(t,x),x,x)-
sin(2*Pi*t)*sum(f[n]*sin(n*Pi*x),n=1..N));

> u2(t,0); u2(t,1);

> u2(0,x);

Finally, we make u as the sum of u1 and u2. Without checking we know that this is the correct answer.

> u:=(t,x)->u1(t,x)+u2(t,x);

But, we check, anyway.

> simplify(diff(u(t,x),t)-diff(u(t,x),x,x)-
sin(2*Pi*t)*sum(f[n]*sin(n*Pi*x),n=1..N));

> simplify(u(0,x)-u(1,x));

> u(t,0); u(t,1);

>

It is time to draw graphs and see this work. Choose F as in Example 3.

> Frc:=x->10*x^2*(1-x)^2;

> for n from 1 to N do
f[n]:=int(Frc(x)*sin(n*Pi*x),x=0..L)/
int(sin(n*Pi*x)^2,x=0..L):
od:
n:='n';

First, we check these coefficients by comparing the graph of the Fourier approximation with the graph of the forcing function, offset by 0.01.

> approx:=x->sum(f[n]*sin(n*Pi*x),n=1..N);

> plot([Frc(x),approx(x)+0.01],x=0..L);

Next, we draw the graph of P.

> plot(P(x),x=0..1);

We draw the graph of the surface u(t,x).

> plot3d(u(t,x),x=0..1,t=0..2,axes=NORMAL);

We watch an animation comparing the forcing function and the solution.

> with(plots):
animate({u(t,x),sin(2*Pi*t)*Frc(x)},x=0..1,t=0..2);

```Warning, the name changecoords has been redefined
```

We watch the term with initial value zero coalesce with this periodic solution.

> animate({u2(t,x),u(t,x)},x=0..1,t=0..1);

>

Section 5: Alternate Methods

Consider again Example 3.

∂u/∂t = ∂²u/∂x² + 10 sin(t) x^2 (1 - x)^2,   u(t, 0) = 0 = u(t, 1).

The method we presented in Section 4: Computations for finding periodic solutions for this equation puts this problem into a setting where the structure is the same as that for the ordinary differential equations setting. There is a satisfaction in making such a unity of structure between ODE's and PDE's.

The method for obtaining solutions for the non-homogeneous equation, and the alternate method for obtaining periodic solutions presented in this section, take advantage of the uniqueness of the representation of a function in terms of a predetermined orthogonal basis, and of the power of a computer algebra system such as this one.

Here, we recall standard methods and techniques for the computation of solutions for non-homogeneous partial differential equations. These methods are illustrated in the text listed in the Reference. In that text, there is a chapter for which the non-homogeneous equation is the subject. The techniques in the next part are the ones used. However, that book does not find periodic solutions. In fact, the author is not aware of a source to illustrate finding such solutions.

Computing solutions for the non-homogeneous equation.

For generality, we rewrite the nonhomogeneous equation as

∂u/∂t = ∂²u/∂x² + F(t, x),   u(t, 0) = 0 = u(t, 1).

As before, we take A to be defined on [0,1] with

A(f) = f '' and f(0) = 0 = f(1).

Recall that the eigenvalues and eigenfunctions for this A are

λ_n = -n^2 π^2 and φ_n(x) = sin(n π x).

Using this orthogonal family, we suppose the solution is

u(t, x) = Σ_{n=1}^∞ T_n(t) sin(n π x).

Substitute this into the original equation and we find the form

Σ_n T_n'(t) sin(n π x) = Σ_n (-n^2 π^2) T_n(t) sin(n π x) + Σ_n g_n(t) sin(n π x)

with

g_n(t) = ∫_0^1 F(t, x) sin(n π x) dx / ∫_0^1 sin(n π x)^2 dx.

Thus, we seek solutions for an infinite system of numerical differential equations

(5.1) T_n'(t) = -n^2 π^2 T_n(t) + g_n(t),   n = 1, 2, 3, ....

While this is in reality an infinite sequence of differential equations, we construct a finite number of them and use these to approximate u(t, x).

To get a specific solution, we need an initial value. We choose a function in the domain of A. We take u(0, x) to be x (1 - x) and F(t, x) to be sin(2 π t) x^2 (1 - x). We do not expect this to lead to a periodic solution, but we expect the solution to coalesce to one.

Here is the initialization. We compute only five terms of the approximation.

> N:=5;
u0:=x->x*(1-x);
F:=(t,x)->sin(2*Pi*t)*x^2*(1-x);

Here is the computation of solutions for five of the numerical ordinary differential equations.

> for n from 1 to N do
denomin:=int(sin(n*Pi*x)^2,x=0..1):
T0:=int(u0(x)*sin(n*Pi*x),x=0..1)/denomin:
gn:=t->int(F(t,x)*sin(n*Pi*x),x=0..1)/denomin:
eq:=diff(Y(t),t)=-n^2*Pi^2*Y(t)+gn(t):
sol:=dsolve({eq,Y(0)=T0},Y(t)):
T[n]:=unapply(rhs(sol),t);
od:
n:='n';

Here is the definition for the approximation of the solution.

> u:=(t,x)->sum(T[n](t)*sin(n*Pi*x),n=1..N);

Here is a plot of this solution. Look carefully to see the wave induced by the periodic solution.

> plot3d(u(t,x),x=0..1,t=0..2,axes=normal);

Here is a check that this solution satisfies the five term approximation.

> expand(simplify(combine(diff(u(t,x),t)-diff(u(t,x),x,x)-
sum(2*int(F(t,x)*sin(n*Pi*x),x=0..1)*sin(n*Pi*x),n=1..N),
trig)));
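Since each projected equation (5.1) is a scalar linear ODE with sinusoidal forcing, it also has a closed form: T_n(t) = (a_n - T_p(0)) e^{λ_n t} + T_p(t) with T_p(t) = -k_n (λ_n sin 2πt + 2π cos 2πt)/(λ_n² + 4π²), where a_n and k_n are the sine coefficients of u(0, x) and of the spatial factor of F. The Python sketch below (an illustration, not the worksheet's Maple) confirms that a direct RK4 integration of each mode reproduces that closed form.

```python
import math

def simpson(g, a, b, m=2000):
    """Composite Simpson rule for the integral of g over [a, b] (m even)."""
    h = (b - a) / m
    s = g(a) + g(b)
    for i in range(1, m):
        s += (4 if i % 2 else 2) * g(a + i*h)
    return s * h / 3

u0 = lambda x: x*(1 - x)
Fx = lambda x: x**2*(1 - x)       # spatial factor of F(t, x) = sin(2πt) x²(1-x)
w = 2*math.pi

gaps = []
for n in range(1, 6):
    lam = -(n*math.pi)**2
    a_n = 2*simpson(lambda x: u0(x)*math.sin(n*math.pi*x), 0.0, 1.0)
    k_n = 2*simpson(lambda x: Fx(x)*math.sin(n*math.pi*x), 0.0, 1.0)

    def Tp(t, lam=lam, k=k_n):
        return -k*(lam*math.sin(w*t) + w*math.cos(w*t)) / (lam**2 + w**2)
    exact = (a_n - Tp(0.0))*math.exp(lam*1.0) + Tp(1.0)

    # RK4 on T' = lam T + k_n sin(2πt), T(0) = a_n, up to t = 1
    f = lambda t, y, lam=lam, k=k_n: lam*y + k*math.sin(w*t)
    h = 1.0/4000
    t, T = 0.0, a_n
    for _ in range(4000):
        k1 = f(t, T); k2 = f(t + h/2, T + h/2*k1)
        k3 = f(t + h/2, T + h/2*k2); k4 = f(t + h, T + h*k3)
        T += h/6*(k1 + 2*k2 + 2*k3 + k4); t += h
    gaps.append(abs(T - exact))

print(max(gaps))  # each mode of (5.1) matches its closed form
```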

>

Periodic Solutions

We now return to the problem of finding a periodic solution for this heat equation with periodic forcing function. As in the previous example of this section, we use the Fourier expansion for the forcing function to change the partial differential equation into an infinite collection of ordinary differential equations. Each of these ordinary differential equations is one dimensional. We solve them with a yet to be determined initial value. That initial value is chosen to create a periodic solution. The techniques for these one dimensional equations are the same as those of Example 1 in Section 4, Computations.

Here is Maple syntax to make an approximation for the solution. Be reminded that we do not need to specify an initial distribution in this problem. Indeed, the problem is to find the initial distribution which generates the periodic solution.

First, we decide how many terms N we will use to approximate the solution, and we specify the periodic forcing function F(t, x).

> restart;

> N:=5;
F:=(t,x)->sin(2*Pi*t)*x^2*(1-x);

Next, we solve the N equations 5.1 which generate periodic solutions.

> for n from 1 to N do
denomin:=int(sin(n*Pi*x)^2,x=0..1):
gn:=t->int(F(t,x)*sin(n*Pi*x),x=0..1)/denomin:
eq:=diff(Y(t),t)=-n^2*Pi^2*Y(t)+gn(t):
sol:=dsolve({eq,Y(0)=c},Y(t)):
YY:=unapply(rhs(sol),t):
cc:=solve(YY(0)=YY(1),c):
T[n]:=unapply(subs(c=cc,YY(t)),t):
od:
n:='n';
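The same periodic initial values can be obtained numerically from the one-period solution map, using linearity: if z_n is the value at t = 1 of the mode solution started at 0, then T_n(1) = e^{λ_n} T_n(0) + z_n, so demanding T_n(1) = T_n(0) gives c_n = z_n/(1 - e^{λ_n}). A Python sketch of this idea (the worksheet's Maple loop solves the same condition symbolically):

```python
import math

def simpson(g, a, b, m=2000):
    """Composite Simpson rule for the integral of g over [a, b] (m even)."""
    h = (b - a) / m
    s = g(a) + g(b)
    for i in range(1, m):
        s += (4 if i % 2 else 2) * g(a + i*h)
    return s * h / 3

def one_period(lam, k, T0, steps=4000):
    """RK4 for T' = lam*T + k*sin(2πt) across one period [0, 1]."""
    f = lambda t, y: lam*y + k*math.sin(2*math.pi*t)
    h = 1.0 / steps
    t, T = 0.0, T0
    for _ in range(steps):
        k1 = f(t, T); k2 = f(t + h/2, T + h/2*k1)
        k3 = f(t + h/2, T + h/2*k2); k4 = f(t + h, T + h*k3)
        T += h/6*(k1 + 2*k2 + 2*k3 + k4); t += h
    return T

gaps = []
for n in range(1, 6):
    lam = -(n*math.pi)**2
    k = 2*simpson(lambda x: x**2*(1 - x)*math.sin(n*math.pi*x), 0.0, 1.0)
    z = one_period(lam, k, 0.0)          # period map applied to 0
    c = z / (1 - math.exp(lam))          # periodic initial value
    gaps.append(abs(one_period(lam, k, c) - c))
print(max(gaps))  # each T_n returns to its initial value: the modes are periodic
```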

We plot these N functions. They may be so small that we need to amplify them. Here, T_n is amplified by 2^n so that each can be seen in the graph.

> plot([seq(2^n*T[n](t),n=1..N)],t=0..1);

We compose the solution u(t, x).

> u:=(t,x)->sum(T[n](t)*sin(n*Pi*x),n=1..N);

We draw the graph of the periodic solution for the partial differential equation.

> plot3d(u(t,x),x=0..1,t=0..1,axes=normal);

As a check, we draw the graph of u(0, x) and u(1, x) to see that they look alike. The graphs may be so close that it is of value to offset one of them so that they can be distinguished.

> plot([u(0,x),u(1,x)+0.0001],x=0..1);

Finally, we check that we really have a solution.

> expand(simplify(combine(
diff(u(t,x),t)-diff(u(t,x),x,x)-
sum(2*int(F(t,x)*sin(n*Pi*x),x=0..1)*sin(n*Pi*x),
n=1..N),trig)));

>

>

>

REFERENCE

George A. Articolo, Partial Differential Equations & Boundary Value Problems with Maple V, Academic Press, 1998.