Contents Previous Next Index

7 Numerical Programming in Maple


An important part of efficient scientific and mathematical programming is numerical computation. Maple provides many tools for computing with floating-point numbers, some for improving efficiency and some for improving accuracy.

7.1 In This Chapter


•

An Overview of Numeric Types in Maple

•

An Explanation of Floating-Point Numbers in Maple

•

Maple Commands for Numerical Computing

•

Efficient Numerical Programs



7.2 Numeric Types in Maple


Before discussing numerical computing in Maple, we will first introduce the various numeric data types used in Maple and briefly describe how they are represented. All of the real numbers discussed in this section will pass checks of type,numeric or type,extended_numeric.

Integers


The most basic numeric type in Maple is the integer. Small integers are represented directly as hardware integers (similar to the int type in C or the integer type in Fortran), which allows for maximum efficiency in both the CPU time used for arithmetic and the memory used for storage. That is, the number can be stored in one machine word and arithmetic operations can be performed with one CPU operation. On 32-bit architectures, integers in the range $-{2}^{30}+1$ to ${2}^{30}-1$ are stored in this way, while on 64-bit architectures, integers in the range $-{2}^{62}+1$ to ${2}^{62}-1$ are. Integers stored in this way are referred to as immediate integers.
Larger integers are stored in DAGs of type INTPOS or INTNEG, which contain pointers to arrays of digits and can store integers up to magnitude ${10}^{{2}^{35}+{2}^{32}-18}$ on 64-bit architectures (the limit is smaller on 32-bit architectures; the exact digit count is given by kernelopts(maxdigits)).
INTPOS(6): 1208925819614629174706175
 
INTNEG(6): -2535301200456458802993406410746
 
The arithmetic for these large integers is computed using the GNU Multiple Precision Arithmetic (GMP) library. This library is quite efficient, but it is still several orders of magnitude slower than arithmetic on immediate integers: each arithmetic operation requires more than one CPU operation, and the larger the integer, the more operations and memory its arithmetic needs.
>

CodeTools:-Usage(add(i, i=2^15..2^16));

memory used=161.02KiB, alloc change=0 bytes, cpu time=4.00ms, real time=3.00ms, gc time=0ns
 
>

CodeTools:-Usage(add(i, i=2^88-2^15..2^88+2^16));

memory used=8.92MiB, alloc change=0 bytes, cpu time=16.00ms, real time=16.00ms, gc time=0ns
 
${30423923890487326980991212339200}$
 (2) 
>

CodeTools:-Usage(add(i, i=2^4097-2^15..2^4097+2^16));

memory used=109.34MiB, alloc change=3.01MiB, cpu time=562.00ms, real time=325.00ms, gc time=499.32ms
 
 (3) 
Any transitions between GMP integers and immediate integers are completely transparent, and in general it is not possible to tell them apart without using low-level tools such as addressof. However, you can check whether an integer is small enough to fit into a single machine word with the types integer[4] and integer[8], for 4-byte and 8-byte words respectively.
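The word-size test behind such type checks can be sketched outside Maple. The following Python sketch (the helper name fits_word is ours, for illustration only) tests whether an integer is representable as a signed two's-complement machine word of a given width:

```python
# Python sketch of the word-size test behind Maple's integer[4] and
# integer[8] type checks. The helper name fits_word is hypothetical,
# used for illustration only.

def fits_word(n, width_bytes):
    """Return True if n fits in a signed two's-complement integer of
    the given byte width."""
    bits = 8 * width_bytes
    return -(2**(bits - 1)) <= n <= 2**(bits - 1) - 1

print(fits_word(2**31 - 1, 4))   # True: largest signed 4-byte integer
print(fits_word(2**31, 4))       # False: needs more than 4 bytes
print(fits_word(2**31, 8))       # True: fits easily in 8 bytes
```

Note that Maple's immediate-integer range is slightly narrower than a full machine word, since some bits are reserved for tagging.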
Integers of all types pass a type,integer type check.
The Integer constructor is guaranteed to return an integer, an extended numeric symbol such as infinity or undefined, a complex number with integer parts, or return unevaluated.
${\mathrm{1461501637330902918203684832716283019655932542976}}$
 (4) 
${\mathrm{Integer}}{}\left(\frac{{1}}{{2}}\right)$
 (6) 
The system-dependent value for the largest immediate integer can be found with kernelopts(maximmediate), the maximum number of decimal digits in an integer with kernelopts(maxdigits), and the version of the GMP library in use with kernelopts(gmpversion).


Rationals


Exact rational numbers are stored in DAGs of type RATIONAL, which consist of a pair of integers. The first integer is the numerator and can be a POSINT or NEGINT. The second integer is the denominator and is a POSINT. Most low-level programming languages such as C or Fortran do not have an equivalent rational number type.
RATIONAL(3): 1/2
INTPOS(2): 1
INTPOS(2): 2
 
RATIONAL(3): -10/3
INTNEG(2): -10
INTPOS(2): 3
 
Rational numbers can be constructed by using the division operator or the Fraction constructor. In either case, automatic simplification will occur to ensure that the denominator is positive and that the fraction is in lowest terms (the numerator and denominator do not have factors in common). This means that the Fraction constructor may return integers in some cases.
>

dismantle(Fraction(21,7));

>

dismantle(Fraction(-40,14));

RATIONAL(3): -20/7
INTNEG(2): -20
INTPOS(2): 7
 
Rational number arithmetic is performed in the natural way using integer arithmetic and the igcd and ilcm operations to reduce to lowest terms.
>

Fraction(2^20+2^12, 2^27-2^13) + Fraction(2^12-1, 2^13);

$\frac{{68141057}}{{134209536}}$
 (7) 
>

Fraction(2^20+2^12, 2^27-2^13) * Fraction(23,187);

$\frac{{5911}}{{6127242}}$
 (8) 
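The reduction to lowest terms described above can be sketched in Python. This is a simplified analogue of the gcd-based rational arithmetic Maple performs, not Maple's actual implementation:

```python
# Sketch of rational addition reducing to integer arithmetic plus a gcd
# to put the result in lowest terms with a positive denominator,
# analogous to Maple's use of igcd and ilcm.
from math import gcd

def add_fractions(n1, d1, n2, d2):
    """Add n1/d1 + n2/d2; return (numerator, denominator) in lowest
    terms with a positive denominator."""
    n = n1 * d2 + n2 * d1
    d = d1 * d2
    g = gcd(n, d)
    n, d = n // g, d // g
    if d < 0:                      # keep the denominator positive
        n, d = -n, -d
    return n, d

# Reproduces the Maple result (7) above:
print(add_fractions(2**20 + 2**12, 2**27 - 2**13, 2**12 - 1, 2**13))
# (68141057, 134209536)
```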
Rational numbers of all types will pass a type,rational type check. Only rational numbers that are not also integers will pass a type,fraction type check. Additionally, type,extended_rational includes all rationals as well as the extended numeric symbols infinity, -infinity, and undefined.
Like the Integer constructor, the Fraction constructor will return unevaluated if it cannot return a value of type extended_rational.
${\mathrm{Fraction}}{}\left({x}{\,}{1}\right)$
 (10) 


Floating-Point Numbers


Floatingpoint numbers are stored in DAGs of type FLOAT.
In Maple, as in nearly every programming language, floating-point numbers can be constructed using, and visually distinguished from integers by, the decimal point symbol, '.'. The floating-point number $1.$ is often treated as equal to the exact integer $1$.
Maple floating-point numbers can also be constructed with the SFloat constructor (or the equivalent Float constructor) and can be checked with the nearly equivalent type,sfloat and type,float types. We will generally refer to these numbers as sfloats when we need to distinguish them from hardware floating-point numbers (hfloats), introduced below.
>

dismantle(SFloat(0.3333));

FLOAT(3): .3333
INTPOS(2): 3333
INTNEG(2): -4
 
A floatingpoint number represents a rational number with a fixed precision. That rational number can be recovered with convert/rational.
>

convert(.3333333333, rational, exact);

$\frac{{3333333333}}{{10000000000}}$
 (17) 
However, not every rational number can be represented exactly by a floatingpoint number. For example, the closest floatingpoint number to $\frac{1}{3}$ is $0.3333333333$.
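The same float-to-exact-rational recovery can be illustrated in Python, which serves here only as an analogy for convert(..., rational, exact):

```python
# Python analogue of Maple's convert(..., rational, exact): Fraction
# recovers the exact rational value that a decimal float represents.
from decimal import Decimal
from fractions import Fraction

print(Fraction(Decimal("0.3333333333")))   # 3333333333/10000000000
```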
Also, unlike the numeric types integer and rational, integer and float do not have compatible arithmetic. Floating-point arithmetic has a fixed, finite precision, and results are rounded when they do not fit into that precision.
>

9123456789 + 8123456789 <> convert( 9123456789. + 8123456789., rational, exact);

${17246913578}{\ne}{17246913580}$
 (19) 
>

123456 * 1234567 <> convert( 123456.*1234567., rational, exact);

${152414703552}{\ne}{152414703600}$
 (20) 
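The rounding in result (19) above can be mimicked with Python's decimal module set to Maple's default of 10 significant digits (Python is used here only as an analogy for sfloat arithmetic):

```python
# Mimic 10-significant-digit software-float arithmetic with Python's
# decimal module: the exact 11-digit sum is rounded to 10 digits.
from decimal import Decimal, getcontext

getcontext().prec = 10
exact = 9123456789 + 8123456789                        # exact integers
rounded = Decimal("9123456789.") + Decimal("8123456789.")
print(exact)     # 17246913578
print(rounded)   # 1.724691358E+10, i.e. 17246913580
```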
Unlike in many other programming languages, the precision of sfloat arithmetic can be changed. For this reason, sfloats are known as arbitrary-precision floating-point numbers.
More information on sfloats, and how they differ from the floating-point types in languages such as C and Fortran, can be found in More about Floating-Point Numbers in Maple.


Hardware Floating-Point Numbers


Floating-point numbers of the type used in languages such as C and Fortran can also be constructed in Maple; they are known as hardware floating-point numbers, or hfloats. These are stored as 8-byte double-precision IEEE floating-point numbers contained in DAGs of type HFLOAT. Since the . notation is used to construct Maple sfloats, hfloats must be constructed with the HFloat constructor. Maple displays sfloats and hfloats the same way, using decimal notation.
>

dismantle(HFloat(0.3333));

The advantage of hfloats over sfloats is that their arithmetic is performed directly by the hardware, using a single CPU operation for each arithmetic operation. Maple sfloats, however, offer much more flexibility and precision. In many ways, the difference is analogous to the difference between immediate integers and GMP integers.
Hardware floats can be distinguished from sfloats with the type,hfloat type.
>

type(HFloat(1), float);

>

type(HFloat(1), sfloat);

>

type(HFloat(1), hfloat);

>

type(SFloat(1), hfloat);

For more information on hardware floats and how they differ from sfloats, see More about FloatingPoint Numbers in Maple.


Extended Numeric Types


The special built-in symbols infinity ($\mathrm{\infty}$) and undefined can be used in numeric arithmetic in Maple. In general, operations involving $\mathrm{\infty}$ simplify automatically to a signed infinity or a complex infinity. For details, refer to the type,infinity help page.
${-}{\mathrm{\infty}}$
 (26) 
The undefined symbol is usually produced as the result of attempting to carry out an operation that cannot result in a number for the given operands. Almost every arithmetic operation involving undefined returns undefined. For details, refer to the type,undefined help page.
${\mathrm{undefined}}$
 (29) 
${\mathrm{undefined}}$
 (30) 
${\mathrm{undefined}}$
 (31) 
Integer and rational numbers share exact undefined and infinite symbols while sfloat and hfloat numbers have their own versions of these, which are displayed differently but treated similarly.
${Float}{}\left({\mathrm{\infty}}\right)$
 (32) 
${Float}{}\left({\mathrm{undefined}}\right)$
 (33) 


Complex Numbers


A complex number in Maple is a DAG of type COMPLEX, which consists of a pair of numbers of any of the preceding numeric types. Complex numbers can be constructed in the natural way using the symbol $\mathrm{I}$ for the imaginary unit, or using the Complex constructor.
COMPLEX(3)
INTPOS(2): 1
INTPOS(2): 1
 
>

dismantle(Complex(1/2,1/3));

COMPLEX(3)
RATIONAL(3): 1/2
INTPOS(2): 1
INTPOS(2): 2
RATIONAL(3): 1/3
INTPOS(2): 1
INTPOS(2): 3
 
Automatic simplification ensures that if one part of a complex number is a float (or hfloat), then the other part is converted to a float (or hfloat) as well.
>

dismantle(Complex(1., 1/1001));

COMPLEX(3)
FLOAT(3): 1.
INTPOS(2): 1
INTPOS(2): 0
FLOAT(3): .9990009990e-3
INTPOS(2): 9990009990
INTNEG(2): -13
 
>

dismantle(Complex(HFloat(1.), 1/1001));

COMPLEX(3)
HFLOAT(2): 1.
HFLOAT(2): .000999000999
 
>

dismantle(Complex(HFloat(1.), 1.));

COMPLEX(3)
HFLOAT(2): 1.
HFLOAT(2): 1.
 
Complex numbers are not of type numeric, but they can be checked with the type,complex type, which can also be structured to check the numeric subtypes of the two components.
>

type(1+I,complex(integer));



Nonnumeric Constants


Many Maple expressions represent constants but are not considered to be of type numeric. This means that arithmetic performed on these constants uses the more generic symbolic operations on DAGs of type SUM, PROD, NAME, or FUNCTION. Some examples of nonnumeric constants are Pi ($\mathrm{\pi}$), $\mathrm{Catalan}$, $\mathrm{sin}\left(1\right)$, $\sqrt{5}$, and $\mathrm{\pi}+{\mathrm{e}}^{2}-\sqrt{1+5\,\mathrm{Catalan}}$.
>

type(sqrt(5)-1, constant);




7.3 More about Floating-Point Numbers in Maple


To take full advantage of floatingpoint numbers and to avoid many common pitfalls in numerical computing, it is important to understand exactly what floatingpoint numbers are and how they are represented.

Representation of Floating-Point Numbers in Maple


The dismantle command shows that the two numbers $1$ and $1.$ have different internal representations. $1$ is simply stored as an integer while $1.$ is stored as a pair of integers.
FLOAT(3): 1.
INTPOS(2): 1
INTPOS(2): 0
 
Similarly, the numbers $\frac{1}{2}$ and $0.5$ are also different even though they are both stored as pairs of integers.
RATIONAL(3): 1/2
INTPOS(2): 1
INTPOS(2): 2
 
FLOAT(3): .5
INTPOS(2): 5
INTNEG(2): -1
 
In Maple, the FLOAT DAG type represents a floating-point number of the form S * 10^E, where both S and E are integers. For $1.$, the significand (or mantissa) is $S=1$ and the exponent is $E=0$. In addition to being specified in decimal notation, floats of this form can be constructed using scientific notation or the Float constructor.
The advantage of using this significand-exponent representation is that fixed-precision approximations of large and small numbers can be stored compactly, and their arithmetic can be done efficiently. Storing the integer 10^50 needs at least 167 bits, or 3 words on a 64-bit machine. The floating-point number 1e50 could in principle be stored in fewer than 8 bits, although in practice it uses 2 words (one for each integer).
INTPOS(8): 100000000000000000000000000000000000000000000000000
 
FLOAT(3): .1e51
INTPOS(2): 1
INTPOS(2): 50
 
Using two immediate integers, a float can store a much larger range of numbers than a rational number built from two immediate integers. The range a rational can represent is about $-1.\times{10}^{9}..1.\times{10}^{9}$, while a float can represent magnitudes in a range of about $1.\times{10}^{-1073741823}..9.\times{10}^{1073741823}$. This is a much larger range for the same storage cost. Of course, the larger range means that floats of a fixed size can represent fewer of the numbers within that range. And since floating-point numbers are always of a fixed size, arithmetic on them is always of limited precision. That is, each operation has to round its result to a number that can be represented as another floating-point number.
In Maple, the significand is limited to 10 decimal digits of precision by default (this can be changed), while the exponent is restricted to being a word-sized integer.
More information on the restrictions on the size of software floats in Maple can be found by using the Maple_floats command.
By contrast, hfloats are represented in base 2 rather than base 10. They represent numbers in the form S * 2^E, where the significand, S, provides 53 bits of precision (52 bits are stored explicitly) and the exponent, E, is an 11-bit field. Thus, the positive numbers representable as normalized hardware floats range from about $2.225073859\times{10}^{-308}$ to $1.797693135\times{10}^{308}$. Because the largest possible significand of a hardware float has about $\mathrm{floor}\left({\mathrm{log}}_{10}\left({2}^{53}\right)\right)=15$ base-10 digits of precision, hardware floats can be converted to software floats without meaningful loss of precision when Digits is 15. Conversely, as long as its exponent is smaller than 307 and its significand has fewer than 15 digits, an sfloat can be converted to an hfloat without loss of precision.
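The sign/exponent/significand layout of an IEEE 754 double can be inspected directly. The following Python sketch (Python floats are IEEE doubles, so this illustrates exactly the hfloat layout) unpacks the three bit fields:

```python
# Pull apart the bits of an IEEE 754 double to show the
# sign / exponent / significand layout of a hardware float.
import struct

def decompose(x):
    """Return (sign, exponent_field, fraction_field) of a float."""
    bits = struct.unpack(">Q", struct.pack(">d", x))[0]
    sign = bits >> 63
    exponent = (bits >> 52) & 0x7FF       # 11-bit exponent field
    fraction = bits & ((1 << 52) - 1)     # 52 stored significand bits
    return sign, exponent, fraction

print(decompose(1.0))    # (0, 1023, 0): 1.0 = +1.0 * 2^(1023-1023)
print(decompose(-2.0))   # (1, 1024, 0)
```

The exponent field is stored with a bias of 1023, so the field value 1023 means an exponent of 0.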


Precision and Accuracy


By default, 10digit precision is used for floatingpoint arithmetic, which means that the arithmetic will be rounded to 10 digits. This means any single floatingpoint operation will be accurate to 10 digits.
For example, storing 10^50+1 requires 51 decimal digits, so it is rounded in floating-point arithmetic. By contrast, 10^50+10^41 can be stored with 10 digits, so it is still computed accurately.
${1.}{\times}{{10}}^{{50}}$
 (40) 
${1.000000001}{\times}{{10}}^{{50}}$
 (41) 
The Digits environment variable can be used to change the working precision used by Maple. Larger values of Digits will allow more accurate computation, but at the cost of slower arithmetic.
${1.00000000000000000000000000000000000000000000000001}{\times}{{10}}^{{50}}$
 (42) 
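The effect of the working precision can be reproduced with Python's decimal module, whose context precision plays the role of Digits in this analogy:

```python
# Digits-style precision control, sketched with Python's decimal module:
# at 10 significant digits, 10^50 + 1 rounds back to 1E+50, while
# raising the precision (like raising Digits) keeps the trailing 1.
from decimal import Decimal, getcontext

getcontext().prec = 10
print(Decimal(10)**50 + 1)                  # 1.000000000E+50
print(Decimal(10)**50 + Decimal(10)**41)    # 1.000000001E+50

getcontext().prec = 51                      # enough digits to be exact
print(Decimal(10)**50 + 1)
```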
The maximum value for Digits is system dependent and can be found with the Maple_floats command.
>

Maple_floats(MAX_DIGITS);

For the default value of Digits, the significand is an immediate integer, so arithmetic is generally fast. It also means that some numerical function evaluations (such as sin in the following example) can use the CPU's native hardware floating-point arithmetic to achieve the needed precision. However, raising Digits above the default value leads to slower arithmetic and slower function evaluation.
>

CodeTools:-CPUTime(add(sin(1e-6*i), i=1..100000));

${1.300}{,}{4995.884639}$
 (44) 
>

CodeTools:-CPUTime(add(sin(1e-6*i), i=1..100000));

${6.494}{,}{4995.884638682140998954}$
 (45) 
Reducing Digits below its default value does not usually lead to large improvements in performance.
>

CodeTools:-CPUTime(add(sin(1e-6*i), i=1..100000));

${1.202}{,}{4996.0}$
 (46) 
It is also important to note that changing Digits does not necessarily change the accuracy of a sequence of floating-point computations; it changes only the precision of the individual operations performed. The following example computes two additions using 10 digits of precision, but catastrophic cancellation leads to only one digit of accuracy in the final answer.
${x}{\u2254}{1.234567890}{\times}{{10}}^{{9}}$
 (47) 
${y}{\u2254}{\mathrm{1.234567889}}{\times}{{10}}^{{9}}$
 (48) 
${z}{\u2254}{3.141592654}$
 (49) 
${4.}{\ne}{4.141592654}$
 (50) 
Ensuring accuracy requires careful study of the problem at hand. In this example, you need 19 digits of precision to get 10 digits of accuracy.
${4.141592654}{=}{4.141592654}$
 (51) 
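The cancellation above can be reproduced with Python's decimal module (again only an analogy for sfloat arithmetic): 10-digit precision gives one accurate digit, while 19-digit precision recovers the full 10-digit answer.

```python
# Catastrophic cancellation: (x + z) - y at two working precisions.
from decimal import Decimal, getcontext

x = Decimal("1234567890")      # 1.234567890 * 10^9
y = Decimal("1234567889")      # 1.234567889 * 10^9
z = Decimal("3.141592654")

getcontext().prec = 10
print((x + z) - y)             # 4: x + z rounds to 1234567893.

getcontext().prec = 19
print((x + z) - y)             # 4.141592654: x + z is kept exactly
```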


Floating-Point Contagion


An important property of floating-point numbers in Maple, and nearly every other computing environment, is contagion. When a numerical expression involves both floating-point numbers and exact numbers, the floating-point property is contagious and causes the result to be a floating-point number.
As you can see, this contagion property can be used as a quick method for converting exact values to floating-point numbers. However, while floating-point contagion extends to all Maple structures of type numeric (except, in some cases, hfloats), it does not apply to nonnumeric constants.
The hfloat type is also contagious, but the precise behavior of the contagion is determined by the UseHardwareFloats environment variable. By default, hfloats are contagious for small values of Digits:
>

type(4/3 + HFloat(0.), hfloat);

>

type(1. + HFloat(0.), hfloat);

>

HFloat(1.1) * sin(4*Pi/7) - 1;

${1.10000000000000}{}{\mathrm{sin}}{}\left(\frac{{3}{}{\mathrm{\pi}}}{{7}}\right){-}{1}$
 (59) 
For large values of Digits, hfloats in computations will be converted to sfloats so that the results are sfloats.
${\mathrm{Digits}}{\u2254}{20}$
 (60) 
>

type(1 + HFloat(0.), hfloat);

>

type(1 + HFloat(0.), sfloat);

If UseHardwareFloats=true then hfloats are completely contagious.
>

UseHardwareFloats := true;

${\mathrm{UseHardwareFloats}}{\u2254}{\mathrm{true}}$
 (63) 
${\mathrm{Digits}}{\u2254}{20}$
 (64) 
${a}{\u2254}{1.0000000000000000001}{\times}{{10}}^{{19}}$
 (65) 
${b}{\u2254}{1.00000000000000}{\times}{{10}}^{{19}}$
 (66) 
If UseHardwareFloats=false then hfloats will always be converted to sfloats in computations, regardless of the setting of Digits. The HFloat constructor will still create hfloats, however.
>

UseHardwareFloats := false;

${\mathrm{UseHardwareFloats}}{\u2254}{\mathrm{false}}$
 (68) 
${\mathrm{Digits}}{\u2254}{10}$
 (69) 
${c}{\u2254}{1.100000000}$
 (70) 
>

type(HFloat(0.1), hfloat);

Table 7.1 summarizes the floating-point contagion rules.

Table 7.1: Floating-Point Contagion Rules

UseHardwareFloats   true     false    deduced   deduced
Digits              any      any      1..15     16 or more
hfloat + hfloat     hfloat   sfloat   hfloat    sfloat
hfloat + sfloat     hfloat   sfloat   hfloat    sfloat
sfloat + sfloat     sfloat   sfloat   sfloat    sfloat



More on the Floating-Point Model


The software floating-point system is designed as a natural extension of the industry standard for hardware floating-point computation, IEEE 754. Thus, there are representations for infinity and undefined (which IEEE 754 calls a NaN, meaning Not a Number), as discussed in Extended Numeric Types.
The IEEE 754 standard defines five rounding algorithms. Two methods, called nearest and simple in Maple, round to nearby values, and the other three are directed roundings that round towards $\mathrm{\infty}$, $-\mathrm{\infty}$, or 0. Maple implements all of these rounding modes, and the desired mode can be selected by setting the Rounding environment variable.
${\mathrm{nearest}}$
 (73) 
${\mathrm{Rounding}}{\u2254}{0}$
 (75) 
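The behavior of the different rounding modes can be illustrated with Python's decimal module, whose rounding settings play the role of Maple's Rounding variable in this analogy (decimal's names differ from Maple's: ROUND_HALF_EVEN corresponds to nearest, ROUND_CEILING and ROUND_FLOOR to the directed roundings towards plus and minus infinity, and ROUND_DOWN to rounding towards 0):

```python
# IEEE 754-style rounding modes at a 3-significant-digit working
# precision: 2.345 rounds differently under each mode.
from decimal import (Decimal, getcontext, ROUND_HALF_EVEN,
                     ROUND_CEILING, ROUND_FLOOR, ROUND_DOWN)

getcontext().prec = 3
for mode in (ROUND_HALF_EVEN, ROUND_CEILING, ROUND_FLOOR, ROUND_DOWN):
    getcontext().rounding = mode
    # adding 0 forces rounding to the 3-digit working precision
    print(mode, Decimal("2.345") + 0)
```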
Another important feature of this system is that the floating-point representation of zero, 0., retains its arithmetic sign in computations. That is, Maple distinguishes between +0. and -0. when necessary. In most situations, this difference is irrelevant, but when dealing with functions that have a discontinuity across the negative real axis, such as $\mathrm{ln}\left(x\right)$, preserving the sign of the imaginary part of a number on the negative real axis is important.
For more intricate applications, Maple implements extensions of the IEEE 754 notion of a numeric event, and provides facilities for monitoring events and their associated status flags. For more information about this system, refer to the numerics help page.



7.4 Maple Commands for Numerical Computing


In this section we will discuss some of the commands available in Maple for floatingpoint computation.

The evalf Command


The evalf command is the primary tool in Maple for performing floatingpoint calculations in software floatingpoint mode. You can use evalf to compute approximations of nonnumeric constants.
You can alter the number of digits of the approximation by changing the value of the environment variable Digits, or by specifying the number as an index to evalf (which leaves the value of Digits unchanged).
${3.1415926535897932385}$
 (78) 
${3.1415926535897932384626433832795028841971693993751058209749445923078164062862089986280348253421170679821480865132823066470938446095505822317253594081284811174502841027019385211055596446229489549303820}$
 (79) 
${1.4142135623730950488}$
 (80) 
Remember that Digits specifies the precision in decimal digits, unlike hardware floating-point numbers, whose precision is specified in binary digits.
All floating-point computations are performed in finite precision, with intermediate results generally being rounded to Digits precision. As such, it is possible for round-off errors to accumulate in long computations. Maple ensures that the result of any single floating-point arithmetic operation (+, -, *, /, or sqrt) is fully accurate. Further, many of the basic functions in Maple, such as the trigonometric functions and their inverses, the exponential and logarithmic functions, and some of the other standard special functions of mathematics, are accurate to within .6 units in the last place (ulps), meaning that if the (Digits+1)-st digit of the true result is a 4, Maple may round it up, and if it is a 6, Maple may round it down. Most mathematical functions in Maple, including numerical integration, achieve this accuracy on nearly all inputs.
It is possible to create software floats with different precisions. Changing the value of Digits will not change these numbers; it affects only the precision of subsequent operations on those numbers.
${\mathrm{Digits}}{\u2254}{50}$
 (81) 
${a}{\u2254}{3.1415926535897932384626433832795028841971693993751}$
 (82) 
${\mathrm{Digits}}{\u2254}{10}$
 (83) 
${3.1415926535897932384626433832795028841971693993751}$
 (84) 
From this example, you can see that evalf can be used to create a lower precision float from one of higher precision. This can be used to round a result to a desired number of digits. However, evalf will not increase the precision of a low precision float.
${3.1415926535897932384626433832795028841971693993751}$
 (88) 
The evalf command also provides an interface to purely numerical computations of integrals, limits, and sums.
Some definite integrals have no closedform solution in terms of standard mathematical functions. You can use evalf to obtain a numerical answer directly using numerical techniques.
>

r := Int(exp(x^3), x=0..1);

${\mathrm{Int}}{}\left({\mathrm{exp}}{}\left({{x}}^{{3}}\right){\,}{x}{\=}{0}{..}{1}\right)$
 (89) 
${\mathrm{int}}{}\left({\mathrm{exp}}{}\left({{x}}^{{3}}\right){\,}{x}{\=}{0}{..}{1}\right)$
 (90) 
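The kind of purely numeric computation that evalf(Int(...)) performs can be sketched with a simple quadrature rule. The following Python sketch uses composite Simpson's rule on the integral above; Maple's own quadrature is far more sophisticated, so this illustrates only the general idea:

```python
# Direct numerical integration of exp(x^3) on [0, 1] with composite
# Simpson's rule (a simplified stand-in for evalf/Int's quadrature).
from math import exp

def simpson(f, a, b, n=1000):
    """Composite Simpson's rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += f(a + i * h) * (4 if i % 2 else 2)
    return total * h / 3

print(simpson(lambda x: exp(x**3), 0.0, 1.0))   # about 1.3419044
```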
In other cases, Maple can find an exact solution, but the form of the exact solution is almost incomprehensible. The function $\mathrm{Beta}$ in the following example is a special function that appears in mathematical literature.
>

q := Int( x^99 * (1-x)^199 / Beta(100, 200), x=0..1/5 );

${\mathrm{Int}}{}\left({277216764217237649652225568421760372018697733716243247244202243820317088554933908000}{}{{x}}^{{99}}{}{\left({1}{-}{x}\right)}^{{199}}{\,}{x}{\=}{0}{..}\frac{{1}}{{5}}\right)$
 (92) 
$\frac{{27852290545780521179255248650434305998403849800909690342170417622052715523897761906828166964420518416902474524718187972029459617663867797175746341349064425727501861101435750157352018112989492972548449}}{{785454954447636248495323512797804102876034481999911930417847858749936840755474537033615661445973112364349371450421100562106866977667955024449202371857434152360496874313577908566230689757503569126129150390625}}$
 (93) 
${3.546007367}{\times}{{10}}^{{\mathrm{8}}}$
 (94) 
The two previous examples use the Int command rather than int for the integration. If you use int, Maple first tries to integrate the expression symbolically. Thus, when evaluating the following commands, Maple determines a symbolic answer and then converts it to a floating-point approximation, rather than performing direct numerical integration. In general, the symbolic computation is more difficult, and thus slower, than the numerical computation.
>

evalf( int(x^99 * (1-x)^199 / Beta(100, 200), x=0..1/5) );

${3.546007367}{\times}{{10}}^{{\mathrm{8}}}$
 (95) 
Similarly, evalf can be used on the inert forms Limit and Sum to compute limits and sums using numerical algorithms.
>

evalf(Limit(sin(erf(1)*x)/(erf(1)^2*x),x=0));

>

evalf( Sum(exp(x), x=RootOf(_Z^5+_Z+1)) );

${4.791792042}{+}{0.}{}{\mathrm{I}}$
 (97) 

When Not to Use evalf


In general, the symbolic commands in Maple can handle floating-point numbers in their input, but, by their nature, floats are not as precise as rationals or symbolic constants. So, even if you want a numerical answer from a command, you should avoid calling evalf on the input.
The following command does not compute the expected answer of $0.1111111111$.
>

limit(n*(evalf(1/3) - 1/(3+1/n)), n=infinity);

${Float}{}\left({-}{\mathrm{\infty}}\right)$
 (98) 
It would have been computed correctly with nonfloat values in the input.
>

evalf( limit(n*(1/3 - 1/(3+1/n)), n=infinity) );




Numeric Solvers


There are also a number of numerical algorithms available in Maple in commands other than evalf. One of the most important is fsolve, which is short for "floating-point solve". This command computes numerical solutions to equations or systems of equations. In general, it is much more efficient than calling evalf on the result of solve, especially if you are interested in only a single solution.
>

fsolve( exp(x) + 2*sin(x), x);

${-}{\mathrm{0.3573274113}}$
 (100) 
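What a numerical solver does at its core can be sketched with a simple root-bracketing method. The following Python sketch uses bisection on the same function; fsolve's algorithms are far more sophisticated, but they locate the same root:

```python
# Bisection on f(x) = exp(x) + 2*sin(x), which has a root near
# -0.3573274113. A minimal stand-in for what a numeric solver does.
from math import exp, sin

def bisect(f, lo, hi, tol=1e-12):
    """Root of f in [lo, hi]; f(lo) and f(hi) must differ in sign."""
    flo = f(lo)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if flo * f(mid) <= 0:
            hi = mid                 # root lies in [lo, mid]
        else:
            lo, flo = mid, f(mid)    # root lies in [mid, hi]
    return (lo + hi) / 2

root = bisect(lambda x: exp(x) + 2 * sin(x), -1.0, 0.0)
print(root)   # about -0.3573274113
```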
The fsolve command is a sophisticated heuristic that chooses among many different algorithms depending on the input. There are several more special purpose solving tools available in the RootFinding package.
Several symbolic solvers in Maple also have numeric modes. The dsolve and pdsolve commands both accept a numeric option, which indicates that a numerical answer should be computed using purely numeric methods. For extensive information on these numeric commands, refer to the dsolve/numeric and pdsolve/numeric help pages.


The evalhf Command


Like evalf, the evalhf command computes a numerical approximation of its input. However, evalhf uses hardware floats in all intermediate calculations before returning an sfloat.
>

dismantle( evalhf(1/3) );

FLOAT(3): .333333333333333315
INTPOS(2): 333333333333333315
INTNEG(2): -18
 
The evalhf command is affected by the value of Digits, but since intermediate calculations are done with hfloats, at most 18 digits will be returned.
${\mathrm{Digits}}{\u2254}{100}$
 (101) 
${0.333333333333333315}$
 (102) 
Notice that in this example the result is only correct to 16 digits. In general, the results from evalhf are guaranteed to 15 digits of accuracy.
To find the number of guaranteed digits for your version of Maple, use evalhf(Digits):
In fact, despite superficial similarities, evalhf is very different from evalf. The evalhf command uses a completely separate evaluation environment that uses only simple hardware types rather than the Maple DAG types. This means that it can be very fast, but at the cost of being limited in the types of computations it can perform.
${\mathrm{Digits}}{\u2254}{15}$
 (104) 
${c}{\u2254}{1.00000000000000}{\times}{{10}}^{{14}}$
 (105) 
>

CodeTools:-Usage( evalhf( add( (i+c), i=1..10^6) ) );

memory used=1.24KiB, alloc change=0 bytes, cpu time=20.00ms, real time=21.00ms, gc time=0ns
 
${1.00000000499999867}{\times}{{10}}^{{20}}$
 (106) 
>

CodeTools:-Usage( ( add( (i+c), i=1..10^6) ) );

memory used=103.06MiB, alloc change=37.00MiB, cpu time=730.00ms, real time=629.00ms, gc time=210.60ms
 
${1.00000000500000}{\times}{{10}}^{{20}}$
 (107) 
${c}{\u2254}{1.00000000000000}{\times}{{10}}^{{14}}$
 (108) 
>

CodeTools:-Usage( ( add( (i+c), i=1..10^6) ) );

memory used=51.02MiB, alloc change=12.00MiB, cpu time=547.00ms, real time=445.00ms, gc time=179.08ms
 
${1.00000000500001}{\times}{{10}}^{{20}}$
 (109) 
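Python floats are IEEE doubles, so ordinary Python arithmetic reproduces the hfloat behavior shown above: accumulating a million terms of size about 1e14 in hardware floats drops low-order digits that exact integer arithmetic keeps.

```python
# Hardware-double accumulation versus exact arithmetic for the sum
# of i + c, i = 1 .. 10^6, with c = 1e14 (cf. results (106)-(107)).
c = 1.0e14
hw = sum(i + c for i in range(1, 10**6 + 1))        # hardware doubles
exact = 10**14 * 10**6 + 10**6 * (10**6 + 1) // 2   # exact integers

print(hw)     # close to, but not exactly, 1.000000005e+20
print(exact)  # 100000000500000500000
```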
In particular, evalhf handles only a specific list of functions. For the list of functions that evalhf recognizes, refer to the evalhf/fcnlist help page.
>

evalhf(sin(exp(gamma+2)+ln(cos(Catalan))));

${0.0980197901238379354}$
 (110) 
evalhf works with Arrays of hardware floats. It cannot handle symbols, lists, sets, and most other Maple data structures.
>

evalhf(map(t->t+1, [1, 2, 3, 4]));

To create an Array of hardware floats, use the option datatype=float[8], which specifies that the elements of the Array are 8-byte hardware floats.
>

A := Array([1, 2, 3, 4], datatype=float[8]);

$\left[\begin{array}{cccc}1.0& 2.0& 3.0& 4.0\end{array}\right]$
 (111) 
>

evalhf(map(t->t+1, A));

$\left[\begin{array}{cccc}2.0& 3.0& 4.0& 5.0\end{array}\right]$
 (112) 
You can also create an Array that can be used by evalhf with the hfarray constructor. Both constructors create an Array of hardware floats. The only difference is that hfarray defaults to C_order instead of Fortran_order.
>

A := hfarray(1..4, 1..4, (i,j)->ithprime(i)*isqrt(3*(i+j)));

$\left[\begin{array}{cccc}4.0& 6.0& 6.0& 8.0\\ 9.0& 9.0& 12.0& 12.0\\ 15.0& 20.0& 20.0& 25.0\\ 28.0& 28.0& 35.0& 35.0\end{array}\right]$
 (113) 
Array(1 .. 4,1 .. 4,{(1, 1) = 4., (1, 2) = 6., (1, 3) = 6., (1, 4) = 8., (2, 1)
= 9., (2, 2) = 9., (2, 3) = 12., (2, 4) = 12., (3, 1) = 15., (3, 2) = 20., (3,
3) = 20., (3, 4) = 25., (4, 1) = 28., (4, 2) = 28., (4, 3) = 35., (4, 4) = 35.}
,datatype = float[8],order = C_order)
 
User-defined Maple procedures can be evaluated in the evalhf environment as long as they comply with the restrictions outlined in the evalhf/procedure help page.
>

SlowPower := proc(a,n) local i, s; s:=1; for i to n do s := a*s; end do; end proc;

${\mathrm{SlowPower}}{\u2254}{\mathbf{proc}}\left({a}{\,}{n}\right)\phantom{\rule[0.0ex]{0.5em}{0.0ex}}{\mathbf{local}}\phantom{\rule[0.0ex]{0.5em}{0.0ex}}{i}{\,}{s}{\;}\phantom{\rule[0.0ex]{0.5em}{0.0ex}}{s}{\u2254}{1}{\;}\phantom{\rule[0.0ex]{0.5em}{0.0ex}}{\mathbf{for}}\phantom{\rule[0.0ex]{0.5em}{0.0ex}}{i}\phantom{\rule[0.0ex]{0.5em}{0.0ex}}{\mathbf{to}}\phantom{\rule[0.0ex]{0.5em}{0.0ex}}{n}\phantom{\rule[0.0ex]{0.5em}{0.0ex}}{\mathbf{do}}\phantom{\rule[0.0ex]{0.5em}{0.0ex}}{s}{\u2254}{a}{\*}{s}\phantom{\rule[0.0ex]{0.5em}{0.0ex}}{\mathbf{end\; do}}\phantom{\rule[0.0ex]{0.5em}{0.0ex}}{\mathbf{end\; proc}}$
 (114) 
>

evalhf( SlowPower(2,10) );



Numerical Linear Algebra


Maple has access to many libraries for fast numeric computation, such as BLAS, CLAPACK, and the NAG® C Library. To take full advantage of the speed provided by these libraries, you need to provide the corresponding commands with Matrices and Vectors of the appropriate datatype.
For example, floating-point Matrix-Matrix products can be computed very quickly by the BLAS libraries, and the quickest dispatch to the BLAS routines happens when the Matrices are created with datatype=float[8].
>

A := Matrix(5^3,5^3,(i,j)->(i-j+1)/(i+j));

>

sz:=interface(rtablesize=[2,2]):

memory used=0.59GiB, alloc change=109.00MiB, cpu time=2.12s, real time=1.72s, gc time=653.40ms
 
>

Ahf := Matrix(5^3,5^3,(i,j)->(i-j+1)/(i+j), datatype=float[8]);

>

CodeTools:-Usage(Ahf^2);

memory used=320.91KiB, alloc change=0 bytes, cpu time=28.00ms, real time=52.00ms, gc time=0ns
 
Of course, many of the linear algebra commands will try to determine if you have a Matrix of low-precision floats and will convert to the appropriate datatype automatically. In the next example, Af has datatype=anything, but the result of Af^2 has datatype=float[8] and requires only a small, but noticeable, copy and conversion overhead.
>

Af := Matrix(5^3,5^3,(i,j)->(i-j+1.)/(i+j));

>

CodeTools:-Usage(Af^2);

memory used=390.13KiB, alloc change=0 bytes, cpu time=5.00ms, real time=3.00ms, gc time=0ns
 
We recommend that you specify datatype=float[8] in your constructors explicitly if you intend to perform numeric computations. This makes the numeric nature of the Matrix explicit, and it makes it impossible to accidentally add non-float entries to a Matrix and thus make subsequent computations slower. An exception will be raised if non-numeric entries are assigned into the Matrix.
$\mathrm{Ahf}_{1,1} ≔ \sqrt{3}$
 (117) 
Other numeric types will be automatically converted to float[8].
$\mathrm{Ahf}_{1,1} ≔ \frac{15}{37}$
 (118) 
${0.405405405405405}$
 (119) 
If a Matrix contains only floats, but does not have a datatype=float[8] restriction, then the addition of symbolic elements causes the more expensive symbolic commands to be used.
$\mathrm{Af}_{1,1} ≔ \sqrt{3}$
 (120) 
>

CodeTools:-Usage(Af^2);

memory used=342.66MiB, alloc change=28.00MiB, cpu time=2.46s, real time=1.97s, gc time=653.02ms
 
>

interface(rtablesize=sz):

Another advantage of float[8] is that these Matrices are stored in the same way as an hfarray, which is analogous to an array of floats in the C or Fortran programming languages. In contrast, a Matrix of datatype=anything or datatype=sfloat is an array of Maple DAGs, each of which takes more memory than a single 8-byte float. Note the difference in memory used in the following two examples.
>

CodeTools:-Usage(Matrix(10^3,3*10^3,(i,j)->10.^4*j+j, datatype=sfloat));

memory used=114.61MiB, alloc change=22.89MiB, cpu time=2.08s, real time=2.08s, gc time=1.35s
 
>

B1:=CodeTools:-Usage(Matrix(10^3,3*10^3,(i,j)->10^4*j+i, datatype=float[8]));

memory used=23.04MiB, alloc change=22.89MiB, cpu time=345.00ms, real time=346.00ms, gc time=0ns
 
It is also important to note that elements extracted from a float[8] rtable will be of type hfloat and so hfloat contagion will apply subject to the current setting of UseHardwareFloats.
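For example, an entry extracted from the float[8] Matrix B1 created above is an hfloat (a sketch, assuming the default setting of UseHardwareFloats):

```maple
# Entries of a float[8] rtable come out as hardware floats
type( B1[1, 1], hfloat );   # true under the default UseHardwareFloats
```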
There are also many optimized commands for Matrices of complex hfloats. These Matrices can be created using the option datatype=complex[8], and work similarly to those of datatype=float[8].
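A minimal sketch of such a construction (the entry generator here is purely illustrative):

```maple
# A 2 x 2 Matrix of hardware complex floats; each entry is stored as a
# pair of 8-byte floats, enabling the optimized complex routines
Z := Matrix(2, 2, (i, j) -> evalf(i + j*I), datatype = complex[8]);
```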
If you are constructing very large Matrices in your programs, use the ArrayTools package to construct and copy Matrices as efficiently as possible.
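For instance, ArrayTools:-Copy transfers the entries of one float[8] Matrix into another in a single external call, avoiding an element-by-element Maple loop (a sketch; the names P and Q are illustrative):

```maple
P := Matrix(1000, 1000, (i, j) -> evalf(i/j), datatype = float[8]):
Q := Matrix(1000, 1000, datatype = float[8]):
ArrayTools:-Copy(P, Q);   # copies the entries of P into Q
```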



7.5 Writing Efficient Numerical Programs


Two main points to keep in mind when trying to write efficient numerical programs are:
•

Try to use hardware floating-point arithmetic when Digits allows

•

Try to minimize memory usage where possible

Writing Flexible Numerical Procedures


You can use the evalhf(Digits) construct to determine whether hardware floating-point arithmetic provides sufficient precision in a particular application. If Digits is less than evalhf(Digits), then you can take advantage of the faster hardware floating-point calculations. Otherwise, you should use software floating-point arithmetic, with sufficient digits, to perform the calculation.
In the following example, the procedure myevalf takes an unevaluated parameter, expr. Without the uneval declaration, Maple would evaluate expr symbolically before invoking myevalf.
>

myevalf := proc(expr::uneval)
if Digits < evalhf(Digits) then
evalf(evalhf(expr));
else
evalf(expr);
end if;
end proc:

The evalhf command evaluates many Maple functions, but not all. For example, you cannot evaluate an integral using hardware floating-point arithmetic.
>

myevalf( Int(exp(x^3), x=0..1) );

You can improve the procedure myevalf so that it traps such errors and tries to evaluate the expression using software floating-point numbers instead.
>

myevalf := proc(expr::uneval)
if Digits < evalhf(Digits) then
try
return evalf(evalhf(expr));
catch:
end try;
end if;
return evalf(expr);
end proc:

>

myevalf( Int(exp(x^3), x=0..1) );

This procedure provides a model of how to write procedures that use hardware floating-point arithmetic whenever possible.
The myevalf procedure returns sfloats. A version that returns hfloats is easiest to write using the hfloat procedure option. This option causes the procedure to use hfloat arithmetic as much as possible, as long as Digits is less than 15. In particular, it converts all floats in the procedure definition into hfloats, and causes evalhf not to convert its output to an sfloat.
>

myevalf := proc(expr::uneval)
option hfloat;
if Digits < evalhf(Digits) then
try
return evalhf(expr);
catch:
end try;
end if;
return evalf(1. * expr);
end proc:

The multiplication by $1.$ was added to the evalf return line to induce hfloat contagion, causing the output to be an hfloat when possible.
>

type( myevalf( Int(exp(x^3), x=0..1) ), hfloat);

For more information on the hfloat option, see The hfloat Option or refer to the option_hfloat help page.


Example: Newton Iteration


This section illustrates how to take advantage of hardware floating-point arithmetic to calculate successive approximations using Newton's method. You can use Newton's method to find numerical solutions to equations. As Example: Creating a Newton Iteration describes, if $x_n$ is an approximate solution to the equation $f(x) = 0$, then $x_{n+1}$, given by the following formula, is typically a better approximation.
$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}$
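For example, with $f(x) = x^2 - 2$ and the starting point $x_0 = 1.5$, a single step of the formula above gives

$x_1 = 1.5 - \frac{1.5^2 - 2}{2 \cdot 1.5} = 1.5 - \frac{0.25}{3} \approx 1.4166667,$

which is already close to $\sqrt{2} \approx 1.4142136$.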
The procedure iterate takes as input a function, f, its derivative, df, and an initial approximate solution, x0, to the equation $f(x) = 0$. The procedure iterate calculates at most N successive Newton iterations, stopping early once the difference between the new approximation and the previous one is small. It prints each intermediate approximation to show the sequence of successive approximations.
>

iterate := proc( f::procedure, df::procedure,
x0::numeric, N::posint, $ )
local xold, xnew;
xold := x0;
xnew := evalf( xold - f(xold)/df(xold) );
to N-1 while abs(xnew-xold) > 10^(1-Digits) do
xold := xnew;
print(xold);
xnew := evalf( xold - f(xold)/df(xold) );
end do;
return xnew;
end proc:

The following procedure calculates the derivative of f and passes all the necessary information to iterate.
>

Newton := proc( f::procedure, x0::numeric, N::posint:=15, $ )
local df;
df := D(f);
print(x0);
return iterate(f, df, x0, N);
end proc:

Use Newton to solve the equation $x^2 - 2 = 0$.
$f ≔ x \mapsto x^{2} - 2$
 (124) 
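A sample call, using the f and Newton defined above (a sketch; the printed approximations are shown to a few digits only):

```maple
Newton( f, 1.5 );
# prints the successive approximations 1.5, 1.41666..., 1.41421...,
# and returns an approximation to sqrt(2), about 1.414213562
```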
This version of Newton uses sfloats unless the arguments passed in are hfloats. If you add option hfloat to the procedure iterate, then hfloats are used automatically, provided the value of Digits is small enough.
>

iterate := proc( f::procedure, df::procedure,
x0::numeric, N::posint, $ )
option hfloat;
local xold, xnew;
xold := 1. * x0;
xnew := 1. * evalf( xold - f(xold)/df(xold) );
to N-1 while abs(xnew-xold) > 10^(1-Digits) do
xold := xnew;
print(xold);
xnew := evalf( xold - f(xold)/df(xold) );
end do;
return xnew;
end proc:

Now the procedure Newton will return hfloats instead of sfloats when Digits is less than 15.
>

type( Newton(f, 1.5), hfloat);

In this case, the procedure is simple enough that we can go beyond option hfloat and use the evalhf command to achieve the best performance. This next version of Newton uses evalhf for floating-point arithmetic if possible and reverts to sfloats otherwise. Since iterate only tries to find a solution to an accuracy of 10^(1-Digits), Newton uses evalf to round the result of the hardware floating-point computation to an appropriate number of digits.
>

Newton := proc( f::procedure, x0::numeric, N::posint:=15, $ )
local df, result;
df := D(f);
print(x0);
if Digits < evalhf(Digits) then
try
return evalf( SFloat( evalhf(iterate(f, df, x0, N)) ));
catch:
end try;
end if;
return evalf( SFloat( iterate(f, df, x0, N) ) );
end proc:

Newton uses hardware floating-point arithmetic for the iterations and rounds the result to software precision. Hardware floating-point numbers have more digits than the software floating-point numbers, given the present setting of Digits.
Newton must use software floating-point arithmetic to find a root of the following Bessel function.
>

F := z -> BesselJ(1, z);

$F ≔ z \mapsto \mathrm{BesselJ}(1, z)$
 (128) 
Software arithmetic is used because evalhf does not recognize BesselJ and the symbolic code for BesselJ uses the type command and remember tables, which evalhf does not allow.
>

evalhf( BesselJ(1, 4) );

$-0.0660433280235491332$
 (130) 
Using a try...catch block (as in the previous Newton procedure) allows the procedure to work when evalhf fails.
The previous Newton procedure prints many digits when it is trying to find a ten-digit approximation. The reason is that the print command is located inside the iterate procedure, which is inside a call to evalhf, where all numbers are hardware floating-point numbers, and print as such.


Example: Jacobi Iteration


Jacobi iteration is an iterative method for numerically solving systems of linear equations that are diagonally dominant (meaning that the magnitude of each diagonal element of the matrix representing the system is larger than the sum of the magnitudes of the other elements in its row). Given an initial guess $\mathrm{x0}$ for the solution to $A\cdot x = b$, the process is: if $x_k$ is an approximation for the solution, then the next approximation is $x_{k+1} = S^{-1}\cdot (b - R\cdot x_k)$, where $S$ is the diagonal of $A$ and $A = S + R$.
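As a small worked illustration (not part of the procedure below), take the diagonally dominant system with

$A = \begin{bmatrix} 4 & 1 \\ 1 & 3 \end{bmatrix}, \quad b = \begin{bmatrix} 5 \\ 4 \end{bmatrix},$

so that $S = \mathrm{diag}(4, 3)$ and $R = A - S$. Starting from $x_0 = (0, 0)$, two steps give

$x_1 = S^{-1} b = \begin{bmatrix} 5/4 \\ 4/3 \end{bmatrix}, \qquad x_2 = S^{-1}(b - R\, x_1) = \begin{bmatrix} (5 - 4/3)/4 \\ (4 - 5/4)/3 \end{bmatrix} = \begin{bmatrix} 11/12 \\ 11/12 \end{bmatrix},$

approaching the exact solution $x = (1, 1)$.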
The procedure Jacobi is a straightforward implementation of Jacobi iteration as it is usually presented in a numerical analysis course.
>

Jacobi := proc(A::Matrix(numeric), b::Vector(numeric),
x0::Vector(numeric):=b, MAXIter::posint:=25,
tolerance::positive:=evalf(LinearAlgebra:-Norm(b,2)*10^(1-Digits)), $)
local i,j,k, x_old, x_new, s, residual, n;
x_new := evalf(x0);
n := LinearAlgebra:-Dimension(b);
x_old := Vector(n, 0, rtable_options(x_new));
residual := evalf(LinearAlgebra:-Norm(A . x_new - b, 2));
for k to MAXIter while residual > tolerance do
ArrayTools:-Copy(x_new, x_old);
for i from 1 to n do
s := 0;
for j from 1 to n do
if i<>j then
s := s + A[i,j] * x_old[j];
end if;
end do;
x_new[i] := (b[i] - s) / A[i,i];
end do;
residual := evalf(LinearAlgebra:-Norm(A . x_new - b, 2));
end do;
if k < MAXIter then
return x_new;
else
WARNING("Residual %1 greater than tolerance %2 after %3 iterations", residual, tolerance, k-1);
return x_new;
end if;
end proc:

Here we construct a random Matrix that is strongly diagonally dominant to test the procedure. Note that, while in practice Jacobi iteration would not be used on dense Matrices, we use dense Matrices in these examples to illustrate some efficiency principles.
>

sz:=interface(rtablesize=[2,2]):

>

M := Matrix(N,N,(i,j)->`if`(i<>j, RandomTools:-Generate(integer(range=-100..100))/1000., RandomTools:-Generate(integer(range=100..10000))/10.), datatype=float);

>

b := LinearAlgebra:-RandomVector(N, datatype=float);

>

CodeTools:-Usage( Jacobi(M, b) );

memory used=0.65MiB, alloc change=0 bytes, cpu time=13.00ms, real time=14.00ms, gc time=0ns
 
The code is written in such a way that it will automatically work for software floats at higher values of Digits.
>

M := Matrix(N,N,(i,j)->`if`(i<>j, RandomTools:-Generate(integer(range=-100..100))/1000., RandomTools:-Generate(integer(range=100..10000))/10.), datatype=float);

>

b := LinearAlgebra:-RandomVector(N, datatype=float);

>

CodeTools:-Usage( Jacobi(M, b) );

memory used=18.66MiB, alloc change=37.00MiB, cpu time=148.00ms, real time=114.00ms, gc time=71.90ms
 
This implementation works well for small Matrices, but when the dimension becomes large, it becomes very slow.
>

M := Matrix(N,N,(i,j)->`if`(i<>j, RandomTools:-Generate(integer(range=-100..100))/1000., RandomTools:-Generate(integer(range=100..10000))/10.), datatype=float);

>

b := LinearAlgebra:-RandomVector(N, datatype=float);

>

CodeTools:-Usage( Jacobi(M, b) );

memory used=142.14MiB, alloc change=16.00MiB, cpu time=2.15s, real time=1.84s, gc time=525.89ms
 
Adding option hfloat to Jacobi is not likely to increase performance: hfloat contagion from the float[8] Matrix elements means that hfloat arithmetic is likely already being used everywhere possible. However, it is possible to rewrite the internal loops as a procedure that can be evaluated with evalhf. (It might be possible to rewrite all of Jacobi to be evaluatable with evalhf, but it would be difficult and the potential gains would be modest.)
>

JacobiHelper := proc(A, b, x_old, x_new, n)
local s, i, j, l;
option hfloat;
# this procedure acts by side effect on x_new
for i from 1 to n do
s := 0;
for j from 1 to n do
if i<>j then
s := s + A[i,j] * x_old[j];
end if;
end do;
x_new[i] := (b[i] - s) / A[i,i];
end do;
end proc:

Here is the rest of the procedure, rewritten to call JacobiHelper and declared with option hfloat.
>

Jacobi := proc(A::Matrix(numeric), b::Vector(numeric),
x0::Vector(numeric):=b, MAXIter::posint:=25,
tolerance::positive:=evalf(LinearAlgebra:-Norm(b,2)*10^(1-Digits)), $)
option hfloat;
local i,j,k, x_old, x_new, s, residual, n;
x_new := evalf(x0);
n := LinearAlgebra:-Dimension(b);
x_old := Vector(n, 0, rtable_options(x_new));
residual := evalf(LinearAlgebra:-Norm(A . x_new - b, 2));
for k to MAXIter while residual > tolerance do
ArrayTools:-Copy(x_new, x_old);
# JacobiHelper acts by side effect on x_new
if Digits <= evalhf(Digits) then
evalhf( JacobiHelper(A, b, x_old, x_new, n) );
else
JacobiHelper(A, b, x_old, x_new, n);
end if;
residual := evalf(LinearAlgebra:-Norm(A . x_new - b, 2));
end do;
if k < MAXIter then
return x_new;
else
WARNING("Residual %1 greater than tolerance %2 after %3 iterations", residual, tolerance, k-1);
return x_new;
end if;
end proc:

>

CodeTools:-Usage( Jacobi(M, b) );

memory used=133.28KiB, alloc change=0 bytes, cpu time=235.00ms, real time=235.00ms, gc time=0ns
 
Using evalhf here achieves an impressive speedup, but you can achieve even better speed by taking advantage of the built-in Matrix and Vector operations. In general, your code will be faster if you can replace nested loops with calls to external commands for Vectors or Matrices. Those commands are highly optimized for your platform, taking advantage of multiple cores and the cache hierarchy where possible.
>

Jacobi := proc(A::Matrix(numeric), b::Vector(numeric),
x0::Vector(numeric):=b, MAXIter::posint:=25,
tolerance::positive:=evalf(LinearAlgebra:-Norm(b,2)*10^(1-Digits)), $)
local k, x_new, S, S_inv, residual;
x_new := evalf(x0);
S := LinearAlgebra:-Diagonal(A, datatype=float);
S_inv := 1 /~ S;
residual := evalf(LinearAlgebra:-Norm(A . x_new - b, 2));
for k to MAXIter while residual > tolerance do
# computing R.x as A.x - S.x is probably a bad idea numerically
# but we do it anyway to avoid making the code overly complicated
x_new := S_inv *~ (b - ( A . x_new - S *~ x_new));
residual := evalf(LinearAlgebra:-Norm(A . x_new - b, 2));
end do;
if k < MAXIter then
return x_new;
else
WARNING("Achieved tolerance of only %1 after %2 iterations", residual, k-1);
return x_new;
end if;
end proc:

>

CodeTools:-Usage( Jacobi(M, b) );

memory used=261.17KiB, alloc change=0 bytes, cpu time=4.00ms, real time=4.00ms, gc time=0ns
 
This sort of speedup is typical. The builtin numerical linear algebra commands are easily an order of magnitude faster than loops run in Maple, and generally also faster than loops in evalhf.
>

interface(rtablesize=sz):




