
Computational Linear Algebra



The implementation of computational linear algebra is encapsulated in a new data structure, the rtable (for "rectangular table"); a new package, LinearAlgebra; and the incorporation of the high-quality numerical algorithms package known as the NAG library. A significant piece of technology was developed that enables the NAG library routines to work tightly with the Maple kernel, allowing these routines to run at compiled-program speed with arbitrary-precision floating-point data.
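As a minimal sketch of the new objects in use (the command names are as documented in the LinearAlgebra package; whether a particular call dispatches to a NAG routine depends on the data, as described below):

```
> with(LinearAlgebra):
> A := Matrix([[1.0, 2.0], [3.0, 4.0]]):  # a Matrix is built on the new rtable structure
> Determinant(A);                         # floating-point data, so a numeric path is taken
```

Because the entries of A are floats, the computation is carried out numerically rather than symbolically.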


The implementation of rtables is complete, as is the implementation of the basic LinearAlgebra package. While small additions to the functionality of the package may occur over time, the broad functionality intended for this package is now complete. However, the NAG library is very extensive, particularly in the area of linear algebra, including many routines that handle special forms of data or use algorithms tailored to certain classes of problems. The incorporation of these more specialized routines into the package is not yet complete.


When a routine in the LinearAlgebra package is invoked on numeric data (i.e., Matrices, Vectors, and scalars which are numeric, not symbolic, objects) which include at least one floating-point number, that routine will usually try to find a NAG routine to complete the operation (whether at hardware precision or arbitrary precision). Every routine in the package for which a NAG routine for the computation exists has been implemented with at least the base level of NAG support. Usually, this means that this support is provided for rectangular Matrices with full, dense storage, stored in Fortran (column-major) order. (See the Matrix and Vector commands for more details about how to override the various defaults.) In some cases, no further special-case NAG routines are yet included in the package.

If a routine with only this base level of support is invoked, it will make copies of its input objects, as required, in order to invoke the appropriate NAG routine. You can determine whether such a copy operation is required by setting an infolevel value. (See Efficient Numerical Linear Algebra for details.) As more of the NAG routines are incorporated into the LinearAlgebra package, such copying will be required less often, resulting in improved overall efficiency.
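As a hedged sketch of the diagnostic mechanism (the exact wording of the informational messages may differ from what is shown in the comments):

```
> with(LinearAlgebra):
> infolevel[LinearAlgebra] := 1:           # report NAG dispatch and any copying of input data
> A := Matrix([[1.0, 2.0], [3.0, 4.0]]):   # full rectangular storage, Fortran order
> b := Vector([1.0, 0.0]):
> LinearSolve(A, b);                       # float data, so a NAG routine is selected
```

If the input objects already match the storage form the NAG routine expects, no copies are reported; otherwise the informational output indicates that a copy was made.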



Numeric Computation



Maple 6 includes a thorough review and overhaul of its model of numeric computation, with the goal of ensuring that Maple's numeric system forms a natural extension to the arbitrary precision and exact arithmetic domains of the IEEE/754 and IEEE/854 standards. The review has been completed, but a few parts of the revision work have yet to be done. Principal among these is the introduction of a new data structure to hold a hardware floating-point number.


The new rtable data structure (discussed above) is capable of holding hardware floats as entries. This specific functionality was already present in Maple V Release 4 as the hfarray structure. For Maple 6, extracting an entry from such an rtable entails conversion to a software float. This is why, for example, if A is a Matrix with datatype float[8], then the test type(A,'Matrix'(float[8])) will return true, but hastype(A,float[8]) will return false. Similarly, the evalhf evaluator works with hardware floating-point numbers, but the result of a computation using this evaluator must again be converted to a software float on exit from the evalhf call. Our future plans include introducing an HFloat() object into Maple, with which a hardware floating-point number can be encapsulated and allowed to persist without conversion. For computationally intensive work, eliminating this conversion overhead should result in significant efficiency gains.
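The distinction can be seen directly; this sketch simply restates the behavior described above:

```
> A := Matrix([[1.0, 2.0], [3.0, 4.0]], datatype = float[8]):
> type(A, 'Matrix'(float[8]));   # true: the container stores hardware floats
> hastype(A, float[8]);          # false: any entry you extract is a software float
> A[1, 1];                       # the extracted entry arrives as a software float
```

No hardware float ever escapes the rtable as such; the planned HFloat() object is intended to remove exactly this conversion step.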


Maple 6 introduces an environment variable called UseHardwareFloats. Ultimately, the setting of this variable will determine the computation domain for all floating-point computation, but since the HFloat object has not yet been implemented, the value of this variable is used only in the context of computation with rtables. Specifically, if an rtable is created with datatype float, the UseHardwareFloats flag will be checked to see if the result should have datatype float[8] (meaning 8-byte hardware floats; UseHardwareFloats = true) or datatype sfloat (meaning software floats; UseHardwareFloats = false). The flag is also used by the LinearAlgebra package routines to determine whether a hardware-precision NAG routine or the corresponding arbitrary-precision NAG routine should be used to carry out a particular floating-point computation.
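As a sketch of the rtable behavior just described (the use of rtable_options to inspect the stored datatype is an assumption here):

```
> UseHardwareFloats := true:
> A := Matrix(2, 2, fill = 1.5, datatype = float):
> rtable_options(A, 'datatype');   # float[8]: hardware floats were selected
> UseHardwareFloats := false:
> B := Matrix(2, 2, fill = 1.5, datatype = float):
> rtable_options(B, 'datatype');   # sfloat: software floats were selected
```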


Maple 6 also introduces an environment variable called Rounding, which controls the rounding mode for floating-point computations. The IEEE/754 standard requires that the rounding mode be checked when determining the default value to return from an untrapped overflow or underflow event. For Maple 6, this functionality has not been implemented; rather, floating-point overflow events always produce some form of infinity as the default result, and floating-point underflow events always return 0. as the default value. (While it is straightforward to change the behavior itself, checking all the ramifications of implementing the IEEE/754 model is a more challenging task.)
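As a hedged sketch of the variable itself (the particular admissible values and their effect on evalf shown here are assumptions, not confirmed by the text above):

```
> Rounding;               # query the current rounding mode
> Rounding := infinity:   # round results toward +infinity
> evalf[10](1/3);         # the last digit reflects the selected mode
> Rounding := nearest:    # restore round-to-nearest
```

As noted above, this mode is not yet consulted when choosing default results for overflow and underflow events.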


One longstanding issue with Maple's numerics has to do with what has come to be known as "premature simplification". For example, consider the following computation sequence.

> x := Float(infinity);
                               x := Float(∞)                           (1)
> x * (1/Float(infinity));
                                     0.                                (2)

This should result in an invalid_operation event being signaled, returning the default value of Float(undefined), assuming no handler has been installed for this event. The result computed by Maple 6 is 0., which arises because Maple evaluates the numeric component (1/Float(infinity)) first, giving x * 0., and then replaces x * 0. with 0. without checking the value of x. This issue will be addressed in a future release.



Other



Many NAG library routines for LinearAlgebra have been incorporated into Maple 6. Work on including NAG routines for other types of numeric computation, such as optimization, differential equations, etc., is continuing.



