Application Center - Maplesoft

Comparison of Multivariate Optimization Methods

© Sergey Moiseev 2006, smoiseev@kodofon.vrn.ru, Kodofon, Russia

Introduction

This application demonstrates the use of Maple to compare methods of unconstrained nonlinear minimization of multivariable functions. Seven methods of nonlinear minimization of the n-variable objective function f(x1,x2,…,xn) are analyzed:

 

1. Minimum search by coordinate and conjugate directions descent - an original minimization method developed by the author (the Descent program);
2. Powell's method (the Powell program);
3. The modified Hooke-Jeeves method (the Hooke program);
4. The Nelder-Mead simplex method (the Nelder program);
5. The quasi-gradient method of minimum search (the Qgrad program);
6. Random directions search (the Random program);
7. Simulated annealing (the Anneal program).

 

All methods are direct search methods; i.e., they do not require the objective function f(x1,x2,…,xn) to be differentiable or continuous. The optimization methods have been compared on a set of 21 test functions. The efficiency of Maple's built-in optimization is also compared with these programs.

 

Brief Description of Optimization Methods

Minimum Search By Coordinate And Conjugate Directions Descent

The method is based on the idea of coordinate descent along all initial coordinates and, when possible, descent in directions conjugate to the initial coordinates. Recall that two directions s[i] and s[j] are conjugate if s[i]^T*H*s[j] = 0 for i <> j and s[i]^T*H*s[i] > 0, where H is the positive definite Hessian matrix: H[i,j] = diff(f, x[i], x[j]), i, j = 1 .. n. If the function f(x1,x2,…,xn) is an n-dimensional positive definite quadratic form (an n-dimensional paraboloid), its minimum is reached in n moves along n distinct conjugate directions.
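Since the worksheet's Maple source is not reproduced here, the conjugacy property can be illustrated with a small self-contained Python sketch (the matrix H, the vector b and the directions are made-up examples, not taken from the worksheet): for a positive definite quadratic, exact line searches along n mutually conjugate directions reach the minimum in exactly n moves.

```python
# Illustrative sketch (not the author's Maple code): exact line searches
# along H-conjugate directions minimize f(x) = 1/2 x^T H x - b^T x in n moves.

def line_min(x, d, H, b):
    # Exact minimizer of the quadratic along x + t*d:
    # t* = d^T (b - H x) / (d^T H d)
    n = len(x)
    Hx = [sum(H[i][j] * x[j] for j in range(n)) for i in range(n)]
    Hd = [sum(H[i][j] * d[j] for j in range(n)) for i in range(n)]
    t = sum(d[i] * (b[i] - Hx[i]) for i in range(n)) / \
        sum(d[i] * Hd[i] for i in range(n))
    return [x[i] + t * d[i] for i in range(n)]

H = [[4.0, 1.0], [1.0, 3.0]]   # made-up positive definite Hessian
b = [1.0, 2.0]
d1 = [1.0, 0.0]                # first direction: a coordinate axis
d2 = [-H[0][1], H[0][0]]       # chosen so that d1^T H d2 = 0 (conjugate)

x = [0.0, 0.0]
for d in (d1, d2):             # n = 2 moves
    x = line_min(x, d, H, b)

# x now satisfies the optimality condition H x = b (up to rounding),
# i.e. x = [1/11, 7/11] for this H and b
```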

Schematically, the computing procedure implemented in the Descent program consists of the following rules. At the beginning, the program moves along the initial coordinates x1,x2,…,xn in turn. The local minimum min(x[i]^k) obtained on the k-th iteration while moving along coordinate x[i] is stored. Later, on finding a second minimum min(x[i]^(k+m)) on the (k+m)-th iteration, the program immediately moves in the direction from the first minimum min(x[i]^k) to the second one min(x[i]^(k+m)). Because both minima min(x[i]^k) and min(x[i]^(k+m)) were obtained while moving along the same coordinate x[i], the direction from min(x[i]^k) to min(x[i]^(k+m)) is conjugate to the direction of coordinate x[i]. If n>2, a third conjugate direction is constructed, connecting two minima obtained while moving along two identical previous conjugate directions. Thus the Descent program approximates a function of n variables by a sequence of two-dimensional and three-dimensional paraboloids.


If coordinate descent is unsuccessful, the objective function is additionally evaluated at the minimum point of the n-dimensional degenerate paraboloid by which the objective function is approximated. This procedure considerably increases the reliability of the program.

Besides descent along the coordinate axes and conjugate directions, the Descent program also moves along a curvilinear direction. The curvilinear direction is obtained by parabolic approximation of the last three local minima found while moving along the last coordinate. Movement along a curvilinear direction allows efficient passage through narrow curved canyons, as in the Rosenbrock function.

 

Powell's Method

Powell's method [1] constructs the complete set of n conjugate directions by the following recurrent rule: the k-th conjugate direction coincides with the direction connecting two local minima obtained by moving along the previous k-1 identical conjugate directions. If the function decreases considerably along the k-th conjugate direction, the corresponding coordinate axis is replaced by this conjugate direction. Thus, unlike the Descent program, the Powell program approximates a function of n variables by a sequence of n-dimensional paraboloids during optimization.
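A minimal Python sketch of this direction-set scheme, assuming a golden-section line search (the quadratic test function and all parameter values are illustrative, not taken from the worksheet's Powell program):

```python
def line_search(f, x, d, a=-10.0, b=10.0, tol=1e-10):
    # Golden-section search for t minimizing f(x + t*d) on [a, b]
    g = (5 ** 0.5 - 1) / 2
    phi = lambda t: f([xi + t * di for xi, di in zip(x, d)])
    while b - a > tol:
        t1, t2 = b - g * (b - a), a + g * (b - a)
        if phi(t1) < phi(t2):
            b = t2
        else:
            a = t1
    t = (a + b) / 2
    return [xi + t * di for xi, di in zip(x, d)]

def powell(f, x0, iters=20):
    n = len(x0)
    dirs = [[float(i == j) for j in range(n)] for i in range(n)]  # coordinate axes
    x = list(x0)
    for _ in range(iters):
        x_start = list(x)
        for d in dirs:
            x = line_search(f, x, d)
        new_dir = [xi - si for xi, si in zip(x, x_start)]
        if any(abs(c) > 1e-12 for c in new_dir):
            # replace the oldest direction with the new conjugate direction
            dirs.pop(0)
            dirs.append(new_dir)
            x = line_search(f, x, new_dir)
    return x

# Hypothetical test problem: convex quadratic with minimum at (2/7, 3/7)
def fq(x):
    return x[0] ** 2 + x[0] * x[1] + 2 * x[1] ** 2 - x[0] - 2 * x[1]

xmin = powell(fq, [0.0, 0.0])   # -> close to [2/7, 3/7]
```

For a 2-D quadratic the direction built from two successive line-search minima is conjugate to the previous one, so the sketch reaches the exact minimum after the second cycle.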

 

The Modified Hooke-Jeeves Method

The Hooke-Jeeves method [1] is a modification of the simplest coordinate descent method. First, k iterations are carried out as in the coordinate descent method. After finding the local minimum s[k] on the k-th iteration, the search proceeds in the direction connecting the initial approximation s[0] and the k-th approximation s[k] (the so-called pattern search).

The modification of the Hooke-Jeeves method performed by the author mainly consists in the following: a variable search step and quadratic interpolation are used in the linear search along each coordinate. This also increases the probability that the direction of the pattern search coincides with a direction conjugate to the coordinate axes.
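The basic (unmodified) exploratory/pattern structure can be sketched as follows in Python; the step control here is a plain geometric shortening, not the author's variable-step-plus-quadratic-interpolation scheme, and the test problem is made up:

```python
def hooke_jeeves(f, x0, step=1.0, shorten=0.5, eps=1e-6):
    def explore(base, s):
        # exploratory move: probe +/- s on each coordinate, keeping improvements
        x = list(base)
        for i in range(len(x)):
            best = f(x)
            for delta in (s, -s):
                trial = list(x)
                trial[i] += delta
                if f(trial) < best:
                    best, x = f(trial), trial
        return x

    x = list(x0)
    while step > eps:
        new = explore(x, step)
        if f(new) < f(x):
            # pattern move: jump from `new` along the direction (new - x)
            pattern = [2.0 * ni - xi for ni, xi in zip(new, x)]
            trial = explore(pattern, step)
            x = trial if f(trial) < f(new) else new
        else:
            step *= shorten  # no improvement: shorten the step
    return x

# Hypothetical test problem: minimum at (1, -2)
def fq(x):
    return (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2

xmin = hooke_jeeves(fq, [0.0, 0.0])   # -> close to [1, -2]
```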

 

Simplex Nelder-Mead Method

When searching for a minimum of a function of n variables by the Nelder-Mead method [1], the trial points are located at the vertices of a simplex (polyhedron). The objective function is evaluated at each vertex of the simplex. The initial polyhedron is then deformed by one of several special procedures (reflection, expansion, compression or reduction) so that the vertex with the maximum function value is transformed into a vertex with a smaller function value. Deformation of the polyhedron continues until its size decreases to the given tolerance. The barycentre of the final polyhedron is taken as the approximation to the minimum point.
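The reflection/expansion/compression/reduction cycle can be sketched as follows; this is a simplified textbook variant in Python, not the worksheet's Nelder program, with the coefficients set to the values recommended in this document (compression 0.5, reflection 1, stretching 2, reduction 2):

```python
def nelder_mead(f, x0, side=1.0, refl=1.0, stretch=2.0, compr=0.5, red=2.0,
                eps=1e-8, nmax=5000):
    n = len(x0)
    # initial simplex: x0 plus one vertex offset by `side` along each axis
    simplex = [list(x0)] + [[x0[j] + (side if j == i else 0.0) for j in range(n)]
                            for i in range(n)]
    for _ in range(nmax):
        simplex.sort(key=f)
        best, worst = simplex[0], simplex[-1]
        if max(abs(v[j] - best[j]) for v in simplex for j in range(n)) < eps:
            break  # simplex has shrunk below the tolerance
        # centroid of all vertices except the worst
        c = [sum(v[j] for v in simplex[:-1]) / n for j in range(n)]
        xr = [c[j] + refl * (c[j] - worst[j]) for j in range(n)]       # reflection
        if f(xr) < f(best):
            xe = [c[j] + stretch * (xr[j] - c[j]) for j in range(n)]   # expansion
            simplex[-1] = xe if f(xe) < f(xr) else xr
        elif f(xr) < f(simplex[-2]):
            simplex[-1] = xr
        else:
            xc = [c[j] + compr * (worst[j] - c[j]) for j in range(n)]  # compression
            if f(xc) < f(worst):
                simplex[-1] = xc
            else:  # reduction: shrink all vertices toward the best one
                simplex = [[best[j] + (v[j] - best[j]) / red for j in range(n)]
                           for v in simplex]
    simplex.sort(key=f)
    return simplex[0]

def rosen(x):  # Rosenbrock's function, test function #3 in this worksheet
    return 100.0 * (x[1] - x[0] ** 2) ** 2 + (1.0 - x[0]) ** 2

xmin = nelder_mead(rosen, [-1.2, 1.0])   # -> close to [1, 1]
```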

 

Quasi-Gradient Method of Minimum Search

The search direction of this method is directly opposite to the gradient vector of the function f(x1,x2,…,xn). Unlike the gradient method, which requires the first analytic derivatives of the objective function, here the derivatives are estimated approximately by finite differences.
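A Python sketch of the idea, with forward differences replacing analytic derivatives (the step-control rule, the difference increment and the test problem are illustrative, not the Qgrad program's):

```python
def quasi_grad(f, x0, step=1.0, shorten=0.7, eps=1e-6, h=1e-8, nmax=100000):
    x = list(x0)
    evals = 0
    while step > eps and evals < nmax:
        fx = f(x)
        # forward-difference estimate of the gradient (no analytic derivatives)
        grad = []
        for i in range(len(x)):
            xp = list(x)
            xp[i] += h
            grad.append((f(xp) - fx) / h)
        evals += len(x) + 2
        norm = sum(g * g for g in grad) ** 0.5
        if norm == 0.0:
            break
        # move a distance `step` opposite to the estimated gradient
        trial = [xi - step * g / norm for xi, g in zip(x, grad)]
        if f(trial) < fx:
            x = trial
        else:
            step *= shorten  # unsuccessful move: shorten the step
    return x

# Hypothetical test problem: minimum at (2, -1)
def fq(x):
    return (x[0] - 2.0) ** 2 + (x[1] + 1.0) ** 2

xmin = quasi_grad(fq, [0.0, 0.0])   # -> close to [2, -1]
```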

 

Random Directions Search

The search directions of this method are random and independent of each other. If a direction is chosen unsuccessfully, another random direction is drawn.
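A Python sketch of this scheme (the shortening factor, the round cap and the test problem are illustrative, not the Random program's):

```python
import random

def random_dirs(f, x0, nrnd=60, step=1.0, shorten=0.5, eps=1e-6,
                max_rounds=5000, seed=1):
    rng = random.Random(seed)  # seeded for reproducibility
    x, fx = list(x0), f(x0)
    rounds = 0
    while step > eps and rounds < max_rounds:
        rounds += 1
        improved = False
        for _ in range(nrnd):  # nrnd independent random directions per round
            d = [rng.gauss(0.0, 1.0) for _ in x]
            norm = sum(di * di for di in d) ** 0.5
            trial = [xi + step * di / norm for xi, di in zip(x, d)]
            ft = f(trial)
            if ft < fx:
                x, fx, improved = trial, ft, True
        if not improved:
            step *= shorten  # no random direction helped: shorten the step
    return x

# Hypothetical test problem: minimum at (1, -2)
def fq(x):
    return (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2

xmin = random_dirs(fq, [0.0, 0.0])   # -> close to [1, -2]
```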

 

Simulated Annealing

Simulated annealing [2] is a random-search technique utilizing an analogy between the way in which a metal cools and freezes into a minimum-energy crystalline structure (the annealing process) and the search for a minimum. Simulated annealing's major advantage over other methods is its ability to avoid becoming trapped in a local minimum. The algorithm employs a random search that accepts not only changes that decrease the objective function f, but also some changes that increase it. The algorithm starts with an initial feasible solution, which is set as the current solution. A neighboring solution is chosen at random from the solution space, and its energy (objective function value) f is compared to that of the current solution. If the energy decreases, this neighboring solution is kept and set as the current solution. Otherwise, this solution is accepted with a probability that depends on the stage the algorithm is in (we designate this stage via a variable T called "temperature"). The probability of accepting a worse solution is P = exp(-df/T), where df is the difference between the energy of the neighboring solution and that of the current solution. The temperature is managed by the cooling schedule that controls the execution of the algorithm: the temperature T decreases according to the formula T_new=Cooling*T_old, where 0<Cooling<1 is the cooling factor.
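The acceptance rule and cooling schedule described above can be sketched in Python as follows (the initial temperature, the parameter values and the test problem are illustrative; the Anneal program instead derives the initial temperature from Trnd random samples):

```python
import math
import random

def anneal(f, x0, t0=1.0, cooling=0.9, step=1.0, shorten=0.95,
           nrnd=30, eps=1e-6, seed=1):
    rng = random.Random(seed)  # seeded for reproducibility
    x, fx = list(x0), f(x0)
    best, fbest = list(x), fx
    T = t0
    while step > eps:
        for _ in range(nrnd):
            # neighbor drawn uniformly within +/- step of the current point
            trial = [xi + step * rng.uniform(-1.0, 1.0) for xi in x]
            df = f(trial) - fx
            # always accept improvements; accept worse moves with P = exp(-df/T)
            if df < 0.0 or rng.random() < math.exp(-df / T):
                x, fx = trial, fx + df
                if fx < fbest:
                    best, fbest = list(x), fx
        T *= cooling      # cooling schedule: T_new = Cooling * T_old
        step *= shorten   # and narrow the search neighborhood
    return best

# Hypothetical test problem: minimum at (0, 0)
def fq(x):
    return x[0] ** 2 + x[1] ** 2

xmin = anneal(fq, [3.0, 4.0])   # -> near [0, 0]
```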

 

 

Initialization

Optimization Programs

These programs are found in the start-up code of this Maple document; to view them in detail, please select Edit > Startup Code.

 

Minimization of the Function f(x1,x2,…,xn) by Coordinate and Conjugate Directions Descent

f - function or procedure of n variables (n=nops(Xmin0)).
Xmin0 - list of n initial approximations of the local minimum.
Step - initial search step.
epsilon<<Step*Shorten - tolerance; the search stops when ||Xmin-XOld||<epsilon.
0<Shorten<1 - step-shortening coefficient. Recommended value: Shorten=0.15.
Nmax - maximum permissible number of function evaluations. If the number of function evaluations exceeds Nmax, the search stops.

Returns a list containing the estimated minimum of the function f, the list of n estimated arguments of the minimum, the number of iterations, and the number of function evaluations.

The arguments Step, epsilon, Shorten, Nmax can be omitted. In this case the default values are: Step:=1; epsilon:=10.^(-6); Shorten:=0.15; Nmax:=10000.

The global listlist Descent_Search_Path contains the search path coordinates.

 

Minimization of the Function f(x1,x2,…,xn) by Powell's Method

f - function or procedure of n variables (n=nops(Xmin0)).
Xmin0 - list of n initial approximations of the local minimum.
Step - initial search step.
epsilon<<Step*Shorten - tolerance; the search stops when ||Xmin-XOld||<epsilon.
0<Shorten<1 - step-shortening coefficient. Recommended value: Shorten=0.01.
Nmax - maximum permissible number of function evaluations. If the number of function evaluations exceeds Nmax, the search stops.

Returns a list containing the estimated minimum of the function f, the list of n estimated arguments of the minimum, the number of iterations, and the number of function evaluations.

The arguments Step, epsilon, Shorten, Nmax can be omitted. In this case the default values are: Step:=1; epsilon:=10.^(-6); Shorten:=0.01; Nmax:=10000.

The global listlist Powell_Search_Path contains the search path coordinates.

 

Minimization of the Function f(x1,x2,…,xn) by the Modified Hooke-Jeeves Method

f - function or procedure of n variables (n=nops(Xmin0)).
Xmin0 - list of n initial approximations of the local minimum.
Step - initial search step.
epsilon<<Step*Shorten - tolerance; the search stops when ||Xmin-XOld||<epsilon.
0<Shorten<1 - step-shortening coefficient. Recommended value: Shorten=0.15.
Nmax - maximum permissible number of function evaluations. If the number of function evaluations exceeds Nmax, the search stops.

Returns a list containing the estimated minimum of the function f, the list of n estimated arguments of the minimum, the number of iterations, and the number of function evaluations.

The arguments Step, epsilon, Shorten, Nmax can be omitted. In this case the default values are: Step:=1; epsilon:=10.^(-6); Shorten:=0.15; Nmax:=10000.

The global listlist Hooke_Search_Path contains the search path coordinates.

 

Minimization of the Function f(x1,x2,…,xn) by the Nelder-Mead Method

f - function or procedure of n variables (n=nops(Xmin0)).
Xmin0 - list of n initial approximations of the local minimum.
Side - initial length of the simplex edge.
epsilon<<Side*Compression - tolerance; the search stops when ||Xmin-XOld||<epsilon.
0<Compression<1 - compression coefficient. Recommended value: Compression=0.5.
Reflection - reflection coefficient. Recommended value: Reflection=1.
Stretching>1 - stretching coefficient. Recommended value: Stretching=2.
Reduction>1 - reduction coefficient. Recommended value: Reduction=2.
Nmax - maximum permissible number of function evaluations. If the number of function evaluations exceeds Nmax, the search stops.

Returns a list containing the estimated minimum of the function f, the list of n estimated arguments of the minimum, the number of iterations, and the number of function evaluations.

The arguments Side, epsilon, Compression, Reflection, Stretching, Reduction, Nmax can be omitted. In this case the default values are: Side:=1; epsilon:=10.^(-6); Compression:=0.5; Reflection:=1; Stretching:=2; Reduction:=2; Nmax:=10000.

The global listlist Nelder_Search_Path contains the search path coordinates of simplex barycentre.

 

Minimization of the Function f(x1,x2,…,xn) by the Quasi-Gradient Method

f - function or procedure of n variables (n=nops(Xmin0)).
Xmin0 - list of n initial approximations of the local minimum.
Step - initial search step.
epsilon<<Step*Shorten - if the search step becomes smaller than epsilon, the search stops.
0<Shorten<1 - step-shortening coefficient. Recommended value: Shorten=0.7.
Nmax - maximum permissible number of function evaluations. If the number of function evaluations exceeds Nmax, the search stops.

Returns a list containing the estimated minimum of the function f, the list of n estimated arguments of the minimum, the number of iterations, and the number of function evaluations.

The arguments Step, epsilon, Shorten, Nmax can be omitted. In this case the default values are: Step:=1; epsilon:=10.^(-6); Shorten:=0.7; Nmax:=10000.

The global listlist Qgrad_Search_Path contains the search path coordinates.

 

Minimization of the Function f(x1,x2,…,xn) by Random Search

f - function or procedure of n variables (n=nops(Xmin0)).
Xmin0 - list of n initial approximations of the local minimum.
Nrnd - number of random directions per iteration.
Step - initial search step.
epsilon<<Step*Shorten - if the search step becomes smaller than epsilon, the search stops.
0<Shorten<1 - step-shortening coefficient. Recommended value: Shorten=0.15.
Nmax - maximum permissible number of function evaluations. If the number of function evaluations exceeds Nmax, the search stops.

Returns a list containing the estimated minimum of the function f, the list of n estimated arguments of the minimum, the number of iterations, and the number of function evaluations.

The arguments Nrnd, Step, epsilon, Shorten, Nmax can be omitted. In this case the default values are: Nrnd:=60; Step:=1; epsilon:=10.^(-6); Shorten:=0.15; Nmax:=10000.

The global listlist Random_Search_Path contains the search path coordinates.

 

Minimization of the Function f(x1,x2,…,xn) by the Simulated Annealing Method

f - function or procedure of n variables (n=nops(Xmin0)).
Xmin0 - list of n initial approximations of the local minimum (initial solution).
Trnd - number of neighboring random solutions generated to calculate the initial temperature (T_initial=(fmin-fmax)/ln(0.8)).
0<Cooling<1 - cooling factor (the temperature T decreases according to the formula T_new=Cooling*T_old).
Step - initial width of the uniform distribution used to select the random search step.
epsilon<<Step*Shorten - if Step becomes smaller than epsilon, the search stops.
0<Shorten<1 - step-shortening coefficient. Recommended value: Shorten=0.85.
Nrnd - number of random solutions after which the temperature (T_new=Cooling*T_old) and Step (Step_new=Shorten*Step_old) are decreased.
Nmax - maximum permissible number of function evaluations. If the number of function evaluations exceeds Nmax, the search stops.

Returns a list containing the estimated minimum of the function f, the list of n estimated arguments of the minimum, the number of iterations, and the number of function evaluations.

The arguments Trnd, Nrnd, Cooling, Step, epsilon, Shorten, Nmax can be omitted. In this case the default values are: Trnd:=10; Nrnd:=30; Cooling:=0.15; Step:=1; epsilon:=10.^(-6); Shorten:=0.95; Nmax:=10000.

The global listlist Anneal_Search_Path contains the search path coordinates.

 

Select Test Objective Function of n-Variables  

Select a test function by clicking the button (NB: The results in this document are from selecting #3):

 

1: n=1
2: n=2 (Quadratic function)
3: n=2 (Rosenbrock function)
4: n=2
5: n=2
6: n=2 (Function with serpentine curved canyon)
7: n=2
8: n=2 (Beale's function)
9: n=2 (Non-differentiable function)
10: n=2 (Function with narrow curved canyon and bad scaling)
11: n=2 (Function with 4 global minima)
12: n=2 (Non-differentiable author's function)
13: n=3
14: n=3 (Least-modules method of estimation)
15: n=3 (Spiral descent)
16: n=4 (Wood's function)
17: n=4
18: n=4 (Least-squares method of estimation)
19: n=5 (Discontinuous plateau function)
20: n=n (n-dimensional positive definite quadratic form; n=1,2,3...)
21: n=n (Discontinuous function with multiple local minima; n=1,2,3...)

Selected function #3 (Rosenbrock):

[-1.2, 1]

proc (x1, x2) options operator, arrow; 100*(x2-x1^2)^2+(1-x1)^2 end proc

f(Xmin0) = 24.2000
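For reference, test function #3 (the Rosenbrock function defined by the Maple procedure above) written in Python:

```python
# Test function #3 (Rosenbrock), as defined by the Maple procedure above
def f(x1, x2):
    return 100.0 * (x2 - x1 ** 2) ** 2 + (1.0 - x1) ** 2

value = f(-1.2, 1.0)   # f at the initial point [-1.2, 1]; approximately 24.2
```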

 

Workplace for Comparison of Optimization Methods

Minimization

Xmin0 - initial point (list of n initial approximations of the local minimum)
Xmin - estimated argument of the minimum
fmin - estimated minimum of the function f
Iter - number of iterations
Nf - number of function evaluations

 

Assign an n-dimensional initial point Xmin0:

Xmin0 := Xmin0

[-1.2, 1]

(3.1.1)


Minimization with the Default Parameters (compared with Maple's Optimization package)

 

Compare

"Exact solution: fmin=0  Xmin=[1, 1]"

Program := [fmin, Xmin, Iter, Nf]

Descent: [0.41034537844e-25, [.999999999999814588, .999999999999637335], 9, 102]

Powell: [0.1129e-32, [.999999999999999973, .999999999999999948], 19, 157]

Hooke: [0.341395219509247027e-11, [.999998154617360801, .999996300010391653], 42, 421]

Nelder: [.134084562785873727, [.636084680020189064, .400541451492369186], 93, 271]

Qgrad: [0.203142035701066477e-3, [.985767710065917775, .971661561002864813], 2205, 10003]

Random: [0.105998799407793578e-10, [1.00000289960609351, 1.00000565116099717], 8, 1111]

Anneal: [0.488533540893646475e-2, [1.06984978064166828, 1.14483041725412132], 86, 2591]

"Maple Optimization"

MAPLE:= [fmin [ x1=Xmin[1], ... ,xn=Xmin[n] ]]

[0.914332635842622784e-16, [x1 = 1.00000000515292872, x2 = 1.00000000950037259]]

 

Minimization continued from the new initial point Xmin0 = Xmin

 

Xmin0 = Xmin0

Descent: [0.34377613344e-25, [.999999999999814588, .999999999999629182], 5, 41]

Powell: [0.1129e-32, [.999999999999999973, .999999999999999948], 9, 44]

Hooke: [0.341395219509247027e-11, [.999998154617360801, .999996300010391653], 5, 26]

Nelder: [0.607633723285523997e-6, [.999252617193034350, .998483645131311850], 45, 135]

Qgrad: [0.226791577811658271e-7, [.999849502442300099, .999699571987120673], 2174, 9753]

Random: [0.476548738711452235e-12, [1.00000056479900706, 1.00000108990566121], 8, 963]

Anneal: [0.154382356833687598e-14, [.999999981696232606, .999999966869239436], 86, 2591]

MAPLE: [0.232788737746418900e-16, [x1 = 1.00000000482094830, x2 = 1.00000000966121792]]

 

Minimization with User-Defined Parameters

Parameters For Minimization Programs

Step - initial search step
eps - tolerance
short - shortening step coefficient (recommended values: Descent 0.15, Powell 0.01, Hooke 0.15, Qgrad 0.7, Random 0.15)
Side - initial length of the simplex edge (Nelder only)
Compression - compression coefficient (Nelder only; recommended value 0.5)
Reflection - reflection coefficient (Nelder only; recommended value 1)
Stretching - stretching coefficient (Nelder only; recommended value 2)
Reduction - reduction coefficient (Nelder only; recommended value 2)
Nrnd - number of random directions (Random only)
Trnd - number of random solutions generated to calculate the initial temperature (Anneal only)
NrndA - number of random solutions that decrease the temperature and Step (Anneal only)
Cooling - cooling factor (Anneal only)
Nmax - maximum permissible number of function evaluations

 

Warning! To improve reliability, increase the shortening step coefficient for non-differentiable functions and decrease it for differentiable functions.

 

Assign parameters

 

NB: Parameter values are assigned via embedded components - raw component output not shown.

Listlist of the Search Path Coordinates

List of lists of  n search path coordinates: [[x1_0, x2_0,.., xn_0], [x1_1, x2_1,.., xn_1],..., [x1_k, x2_k,.., xn_k]],

n - number of variables, k - number of search directions.

 

NB: Very large output - not shown

Plot of Search Paths

Plots of the x1 and x2 components of the n search path coordinates.

 

Plots


Optimization Efficiency (for the four best programs: Descent, Powell, Hooke and Nelder)

Computation Time and Accuracy of Minimization with the Default Parameters (compared with the Maple Optimization package)

 

Repetition_Number is the number of minimization repetitions. The larger Repetition_Number is, the better the computation time estimate.

 

Calculate

10


Function minimum estimations
Descent:   4.10e-26
Powell:    1.13e-33
Hooke:     3.41e-12
Nelder:    1.34e-01
Maple:     9.14e-17

Time minimization (s)
Descent:   0.266
Powell:    0.250
Hooke:     0.281
Nelder:    0.328
Maple:     0.438

Number of function evaluations
Descent:    102
Powell:     157
Hooke:      421
Nelder:     271

Time equivalent number of function evaluations for Maple:    168

 

 

Assessing the optimization parameters:

 

Parameters

NB: Parameter values are assigned via embedded components - raw component output not shown.

 

Optimization Efficiency as a Function of the Initial Point (compared with Maple's Optimization package)

X0_min - first initial point
X0_incr - increment of the initial point
X0_number - number of initial points
Side - step for Nelder
max_fmin - permissible maximum value of the function minimum (for reliability evaluation)

 

X0_min := Xmin0; X0_incr := .5; X0_number := 40; max_fmin := 0.1e-2

[-1.2, 1]

(3.2.2.1)


Calculate


Reliability (percentage of number of function evaluations when fmin<max_fmin)
Descent:   100.00%
Powell:     87.50%
Hooke:     100.00%
Nelder:     65.00%
Maple:     100.00%

Median of function minimum estimations
Descent:   1.60e-21  
Powell:    2.32e-30  
Hooke:     3.90e-10  
Nelder:    5.97e-07  
Maple:     1.35e-15  



Average number of function evaluations
Descent:    184
Powell:     281
Hooke:      865
Nelder:     327

Time equivalent for Maple:     473

 

Optimization Efficiency as a Function of Initial Search Step

Step_min - minimum initial search step
Step_max - maximum initial search step
Step_incr - increment of the initial search step
Side - step for Nelder
max_fmin - permissible maximum value of the function minimum (for reliability evaluation)

 

Step_min := .1; Step_max := 10; Step_incr := .1; max_fmin := 0.1e-2


Calculate


Reliability (percentage of number of function evaluations when fmin<max_fmin)
Descent:   100.00%
Powell:    100.00%
Hooke:     100.00%
Nelder:     96.00%

Median of function minimum estimations
Descent:   2.68e-23  
Powell:    1.01e-32  
Hooke:     1.30e-10  
Nelder:    3.95e-07  


Average number of function evaluations
Descent:     85
Powell:     114
Hooke:      344
Nelder:     265

 

Optimization Efficiency as a Function of Tolerance

Eps_max - maximum tolerance
Eps_min - minimum tolerance
Eps_mult - tolerance multiplier
Step - initial search step (Side=Step for Nelder)

 

Eps_max := .1; Eps_min := evalf(10^(-15)); Eps_mult := .5; Step := .5

Calculate


Optimization Efficiency as a Function of Digits

Digits_min - minimum Digits
Digits_max - maximum Digits
Digits_incr - increment of Digits
Step - initial search step (Side=Step for Nelder)
Eps - tolerance

 

Digits_min := 5; Digits_max := 34; Digits_incr := 1; Step := .5; Eps := 10.^(-6)


Calculate

18


Optimization Method Outcomes Comparison

Optimization methods have been compared on a set of 21 test functions, mainly taken from [1].

Comparison of various methods of unconstrained nonlinear minimization of multivariable functions is difficult because of the effects of deterministic chaos during optimization. Even an insignificant variation of the optimization parameters can cause a significant variation in the number of objective function evaluations and in the accuracy of the minimum value approximation. Therefore, averaged performance measures of optimization efficiency must be used for the comparison.

We characterize optimization efficiency by the number of objective function evaluations Nf and the accuracy of the objective function minimum estimate Df(xmin). Test outcomes are listed in Table 1. The initial search step varies from 0.02 up to 4 in increments of 0.02. The Nf columns show the number of objective function evaluations averaged over these steps. The Df(xmin) columns show the median of the difference between the achieved minimum of the objective function and the real minimum f(xmin). A dash means failure in optimization.

 

Table 1. Number of Function Evaluations Nf and Accuracy of Approximated Minimum Values Df(xmin)

Each cell gives Nf, Df(xmin); a dash means failure in optimization.

f(x) | Initial point | Descent | Powell | Hooke | Nelder | Qgrad | Random | Anneal | Maple
1 | 10 | 26, 9e-7 | 39, 2e-8 | 28, 7e-7 | 133, 1e-7 | 42, 6e-6 | 37, 5e-6 | 2291, 8e-9 | -
2 | (8, 9) | 21, 9e-34 | 21, 1e-50 | 31, 1e-34 | 235, 5e-11 | 253, 4e-17 | 233, 3e-13 | 2019, 4e-16 | 107, 1e-13
3 | (-1.2, 1) | 112, 4e-21 | 169, 3e-28 | 373, 1e-11 | 333, 5e-11 | 1777, 6e-8 | 3638, 5e-11 | 4011, 2e-7 | 215, 9e-17
3 | (-2.547, 1.489) | 102, 1e-20 | 163, 3e-28 | 332, 1e-11 | 391, 5e-11 | 19490, 4e-7 | 2941, 1e-11 | 10031, 8e-10 | 236, 3e-14
4 | (0.211, 3.505) | 69, 3e-17 | 60, 5e-34 | 112, 1e-15 | 253, 5e-11 | 499, 1e-11 | 774, 1e-13 | 1911, 2e-17 | 75, 3e-18
5 | (-2, 8) | 50, 1e-21 | 38, 3e-32 | 61, 3e-13 | 277, 5e-11 | 136, 2e-11 | 782, 9e-13 | 4761, 3e-18 | 55, 2e-16
6 | (-1.5, 3.5) | 157, 6e-23 | 164, 9e-25 | 429, 1e-9 | 381, 6e-11 | 36111, 2e-6 | 3776, 1e-12 | 8831, 4e-10 | 325, 1e-19
6 | (0.248, -3.082) | 221, 2e-20 | 224, 2e-26 | 516, 2e-11 | 445, 6e-11 | 31179, 3e-6 | 24211, 4e-10 | 10011, 8e-13 | 166, 2e-15
7 | (2, 2) | 86, 2e-13 | 121, 6e-31 | 73, 3e-10 | 185, 3e-11 | 172347, 9e-4 | 1641, 4e-10 | 1911, 1e-9 | 238, 5e-25
8 | (0.5, -0.5) | 64, 6e-16 | 62, 1e-30 | 104, 7e-14 | 205, 5e-11 | 118, 5e-2 | 11161, 6e-12 | 1911, 3e-17 | 110, 1e-18
8 | (8, 0.8) | 151, 7e-16 | 102, 2e-29 | 222, 3e-13 | 253, 4e-11 | 1686, 3e-4 | 34148, 1e-12 | 50111, 2e-13 | 146, 5e-19
9 | (-2, 2) | 100, 1e-11 | - | 40, 4e-10 | 323, 1e-10 | - | 1748, 6e-6 | 2861, 2e-7 | -
10 | (-1, 1) | 135, 6e-14 | 220, 6e-14 | 1946, 5e-12 | 624, 5e-11 | - | - | - | 378, 6e-14
11 | (1, 1) | 72, 2e-14 | 53, 9e-29 | 84, 2e-12 | 229, 4e-11 | 336, 1e-11 | 213, 1e-11 | 1911, 6e-16 | 101, 2e-14
12 | (1, -1) | 561, 5e-10 | - | 1003, 1e-3 | 1862, 9e-8 | - | 9976, 9e-7 | 1911, 9e-5 | -
13 | (-1.2, 2, 0) | 317, 2e-16 | 162, 3e-24 | 433, 1e-10 | 406, 6e-11 | 4360, 5e-1 | 1042, 2e-6 | 4761, 4e-17 | 680, 3e-11
14 | (95, 15, 2) | 256, 8e-7 | 270, 5e-8 | 371, 6e-6 | 549, 2e-6 | 675, 3e-8 | 2157, 6e-2 | - | -
15 | (-1, -1, -1) | 205, 6e-21 | 121, 2e-25 | 382, 6e-14 | 442, 6e-11 | - | 14423, 4e-7 | 9511, 7e-15 | 253, 3e-16
16 | (3, -1, 0, 1) | 345, 2e-14 | 264, 8e-22 | 1516, 7e-7 | 1005, 9e-9 | 24182, 3e-7 | - | - | 516, 8e-13
17 | (1, 1, 1, 1) | 371, 1e-11 | 356, 3e-20 | 421, 3e-9 | 493, 7e-9 | 3881, 3e-5 | 11202, 2e-4 | 14711, 2e-6 | 343, 3e-12
18 | (2.7, 90, 1500, 10) | 654, 5e-11 | 296, 1e-13 | 4336, 5e-8 | 918, 1e-7 | - | - | - | 627, 9e-11
19 | (-1, -1, -1, -1, -1) | - | - | - | - | - | - | - | -
20 | (8, 8, 8, ...), n=10 | 573, 4e-15 | 313, 2e-48 | 499, 3e-14 | 6326, 1e-9 | 1867, 1e-14 | 1355, 1e-4 | 3191, 9e-15 | 85, 4e-15
20 | (8, 8, 8, ...), n=50 | 3264, 5e-15 | 18391, 1e-31 | 2919, 2e-15 | - | - | - | - | 204, 3e-15
21 | (8, 8, 8, ...), n=10 | 280, 1e-37 | 102, 1e-43 | 186, 1e-35 | - | - | - | - | 330, 1e-16

 

Nf* -  Time equivalent number of function evaluations for Maple.

 

Note that the Qgrad, Random and Anneal programs give unsatisfactory outcomes on the majority of test functions. The efficiency of the Descent, Powell, Hooke and Nelder programs is considerably better than that of the above-mentioned ones.

The programs can be ranked by three performance measures: reliability, quickness and accuracy. Reliability is the percentage of runs in which the value of the reached minimum is less than a given magnitude. Quickness is the number of objective function evaluations needed to reach a given accuracy. Accuracy is the difference between the achieved minimum of the objective function and the real minimum. Since the ranking depends on the properties of the test functions, all test functions have been divided into two classes - differentiable and non-differentiable. Tables 2 and 3 show the programs ranked top-down from best to worst.