Grid Computing (Parallel Distributed Computing) in Maple 16
Distributed systems offer fantastic gains when it comes to solving large-scale problems. By sharing the computational load, you can solve problems too large for a single computer to handle, or solve them in a fraction of the time it would take on a single machine. Platform support for the Grid Computing Toolbox has been extended in Maple 16, making it easy to create and test parallel distributed programs on some of the world's largest clusters.
MPICH2 is a high performance implementation of the Message Passing Interface (MPI) standard, distributed by Argonne National Laboratory (http://www.mcs.anl.gov/research/projects/mpich2/). The stated goals of MPICH2 are to:
Provide an MPI implementation that efficiently supports different computation and communication platforms including commodity clusters (desktop systems, shared-memory systems, multicore architectures), high-speed networks (10 Gigabit Ethernet, InfiniBand, Myrinet, Quadrics) and proprietary high-end computing systems (Blue Gene, Cray, SiCortex) and
Enable cutting-edge research in MPI through an easy-to-extend modular framework for other derived implementations.
The Grid Computing Toolbox for Maple 16 makes it easy to set up multi-process computations that interface with MPICH2 to deploy parallel computations across multiple machines or an entire cluster.
Example: Monte Carlo Integration
Random values of x can be used to compute an approximation of a definite integral according to the following formula.
Area = ∫_a^b f(x) dx = lim_{N→∞} (1/N) ∑_{i=1}^{N} f(r_i)·(b − a)
This procedure efficiently calculates a one-variable integral using the above formula where r is a random input to f.
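The original worksheet's procedure body is not reproduced here; the following is a minimal sketch of what such a procedure might look like, with the name approxint and the default of 1000 samples taken from the surrounding text and everything else an assumption.

```maple
# Hypothetical sketch only -- the worksheet's actual procedure body is not
# shown in this document, so the parameter names and defaults are assumptions.
approxint := proc( f, lim::`=`, N::posint := 1000 )
    local x, a, b, total, i;
    x := lhs( lim );                    # variable of integration, e.g. x
    a := evalf( lhs( rhs( lim ) ) );    # lower limit of the range
    b := evalf( rhs( rhs( lim ) ) );    # upper limit of the range
    total := 0;
    for i to N do
        # sample f at a uniformly random point r in [a, b]
        total := total + eval( f, x = a + (b - a)*evalf( rand()/10^12 ) );
    end do;
    (b - a) * total / N;                # (1/N) * sum, scaled by (b - a)
end proc:
```

For example, approxint( x^2, x = 1..3 ); returns a value near the exact integral 26/3.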
A sample run using 1000 data points shows how this works:
approxint( x^2, x = 1..3 );
This can be computed exactly in Maple: ∫_1^3 x^2 dx = 26/3, or 8.666666667 at 10 digits. The approximation above is rough, but close enough for some applications.
A parallel implementation adds the following code to split the problem over all available nodes and send the partial results back to node 0. Note that here the head node, 0, performs the calculation and then accumulates the results from the other nodes.
Integrate over the range, lim, using N samples. Use as many nodes as are available in your cluster.
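The node-level code is not reproduced in this document; the following is a hedged sketch of what it might look like. It assumes the serial approxint procedure described earlier and uses the Grid package's Send/Receive message passing; as the text notes, node 0 computes its own share and then accumulates the partial results from the other nodes.

```maple
# Hypothetical sketch -- assumes a serial approxint(f, lim, N) procedure.
parallelApproxint := proc( f, lim::`=`, { numSamples::posint := 1000 } )
    local p, me, partial, total, i;
    p  := Grid:-NumNodes();   # nodes participating in this job
    me := Grid:-MyNode();     # this node's id, 0 .. p-1
    # every node estimates the integral from its share of the samples
    partial := approxint( f, lim, ceil( numSamples / p ) );
    if me = 0 then
        # head node accumulates the partial results from the other nodes
        total := partial;
        for i to p - 1 do
            total := total + Grid:-Receive( i );
        end do;
        total / p;            # average of the p per-node estimates
    else
        Grid:-Send( 0, partial );
    end if;
end proc:
```

Averaging the p per-node estimates, each based on numSamples/p random points, yields a single estimate based on roughly numSamples points in total.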
Note: In the following command, replace "MyGridServer" with the name of the head node of your Grid Cluster.
Grid:-Launch( parallelApproxint, x^2, x = 1..3, numSamples = 10^7, imports = 'approxint', numnodes = 16 );
Execution times are summarized as follows. Computations were executed on a 3-blade cluster with 6 quad-core AMD Opteron 2378/2.4GHz processors and 8GB of memory per pair of CPUs, running Windows HPC Server 2008.
Number of Compute Nodes
Real Time to Compute Solution
1 (using serialized code)
1 (using Grid)
The speedup is a measure of T1/Tp, where T1 is the execution time of the sequential algorithm and Tp is the execution time of the parallel algorithm using p processes.
The compute time in Maple without using MapleGrid is the first number in the table -- ~6 minutes. The remaining times were obtained using MapleGrid with varying numbers of cores. The graph shows that performance scales roughly linearly as cores are added: when 23 cores are dedicated to the same example, it completes in only 15.3 seconds.
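The quoted figures can be checked against the T1/Tp definition of speedup directly; taking the serial time as roughly 360 s is an assumption based on the "~6 minutes" above.

```maple
# T1/Tp with T1 ~ 360 s (assumed from "~6 minutes") and Tp = 15.3 s on 23 cores
evalf( 360/15.3 );   # about 23.5, consistent with near-linear scaling
```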
Grid package documentation