Fox's Gradient Search with Adaptive Stepsize
by Dr. William P. Fox (WFox@fmarion.edu) and Dr. William H. Richardson (WRichardson@fmarion.edu), Dept. of Math., Francis Marion University, Florence, SC. Adaptation and minor improvements for Maple 18 (Maple 2015) by Dr. Henk Koppelaar (Koppelaar.Henk@GMail.com).
This worksheet solves nonlinear optimization problems by the method of steepest ascent. Given a function f and a current point x[k], the search direction is taken to be the gradient of f at x[k], denoted ∇f(x[k]). The step length λ is computed by a line search, i.e. as the step length that maximizes f(x[k] + λ∇f(x[k])) along the gradient direction.
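As a small illustration of this line search (the quadratic objective and the point (0, 0) below are assumptions chosen for the example, not one of the worksheet's test functions), the maximizing step length can be found symbolically in Maple:

    f   := -(x - 1)^2 - (y - 1)^2:                       # illustrative objective (an assumption)
    g   := [eval(diff(f, x), {x = 0, y = 0}),
            eval(diff(f, y), {x = 0, y = 0})]:           # gradient at (0, 0), here [2, 2]
    phi := eval(f, {x = lambda*g[1], y = lambda*g[2]}):  # f restricted to the gradient ray
    solve(diff(phi, lambda) = 0, lambda);                # maximizing step length: 1/2

Stepping λ = 1/2 from (0, 0) along the gradient [2, 2] lands exactly on the maximizer (1, 1); the method solves the same one-dimensional problem anew at every iteration.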
Algorithm for the Gradient Method of Steepest Ascent:
Step 1. Enter the function f to be maximized, the maximum number of iterations allowed n, the starting point x[0], and an accuracy tolerance t.
Step 2. Choose λ to maximize f(x[k] + λ∇f(x[k])).
Step 3. Move to the next point x[k+1] = x[k] + λ∇f(x[k]).
Step 4. If ||∇f(x[k+1])|| > t, return to Step 2; otherwise terminate. We also terminate once the number of iterations exceeds the allowed maximum n from Step 1. (A Maple sketch of this loop follows.)
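The following procedure is a minimal sketch of Steps 1-4, not the worksheet's original code: the procedure name, the use of two scalar variables x and y, and the fsolve search window 0 .. 10 for λ are all assumptions, and fsolve returns unevaluated if the line-search equation has no root in that window.

    SteepestAscent := proc(f, x0::numeric, y0::numeric, n::posint, t::numeric)
        local gx, gy, xk, yk, k, g1, g2, phi, lambda, lamstar;
        gx := diff(f, x);  gy := diff(f, y);              # symbolic gradient of f(x, y)
        xk := evalf(x0);   yk := evalf(y0);               # Step 1: starting point x[0]
        for k from 1 to n do                              # at most n iterations
            g1 := evalf(eval(gx, {x = xk, y = yk}));      # gradient at the current point
            g2 := evalf(eval(gy, {x = xk, y = yk}));
            if sqrt(g1^2 + g2^2) <= t then break end if;  # Step 4: stop when ||G|| <= t
            phi := eval(f, {x = xk + lambda*g1, y = yk + lambda*g2});   # Step 2: line search
            lamstar := fsolve(diff(phi, lambda) = 0, lambda = 0 .. 10); # maximizing step length
            xk := xk + lamstar*g1;  yk := yk + lamstar*g2;              # Step 3: next point
        end do;
        return [xk, yk], evalf(eval(f, {x = xk, y = yk}));
    end proc:

For example, SteepestAscent(-(x - 1)^2 - (y - 1)^2, 0.6, 4.0, 25, 0.01) returns [1.0, 1.0] and the maximum value 0. for this illustrative quadratic (which is not necessarily the function behind the output listings below).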
----------------------------------------------------------------------------
Starting point ( 0.6000,  4.0000)

Iter   Gradient Vector G       magnitude of G   x[k]                 Step Length
  1    (  6.8000, -12.8000)       14.4940       ( 0.6000,  4.0000)      0.1917
  2    ( -0.7138,  -0.3792)        0.8083       ( 1.9034,  1.5465)      1.2772
  3    (  0.1409,  -0.2652)        0.3004       ( 0.9917,  1.0622)      0.1917
  4    ( -0.0148,  -0.0079)        0.0168       ( 1.0187,  1.0113)
----------------------------------------------------------------------------
Approximate Solution:        ( 1.0187,  1.0113)
Maximum Functional Value:      0.9998
Number gradient evaluations:   5
Number function evaluations:   4
----------------------------------------------------------------------------
----------------------------------------------------------------------------
Starting point ( 1.0000,  1.0000)

Iter   Gradient Vector G        magnitude of G   x[k]                 Step Length
  1    (  47.0000, 105.0000)      115.0400       ( 1.0000,  1.0000)      0.0380
  2    (  32.7185, -14.6454)       35.8470       ( 2.7852,  4.9882)      0.0857
  3    (  10.2936,  22.9964)       25.1950       ( 5.5883,  3.7335)      0.0380
  4    (   7.1658,  -3.2075)        7.8509       ( 5.9793,  4.6069)      0.0857
  5    (   2.2544,   5.0365)        5.5180       ( 6.5932,  4.3321)      0.0380
  6    (   1.5694,  -0.7025)        1.7195       ( 6.6788,  4.5234)      0.0857
  7    (   0.4938,   1.1031)        1.2085       ( 6.8133,  4.4632)      0.0380
  8    (   0.3437,  -0.1539)        0.3766       ( 6.8320,  4.5051)      0.0857
  9    (   0.1081,   0.2416)        0.2647       ( 6.8615,  4.4919)      0.0380
 10    (   0.0753,  -0.0337)        0.0825       ( 6.8656,  4.5011)      0.0857
 11    (   0.0237,   0.0529)        0.0580       ( 6.8720,  4.4982)      0.0380
 12    (   0.0165,  -0.0074)        0.0181       ( 6.8729,  4.5002)      0.0857
 13    (   0.0052,   0.0116)        0.0127       ( 6.8744,  4.4996)      0.0380
 14    (   0.0036,  -0.0016)        0.0040       ( 6.8745,  4.5001)      0.0857
 15    (   0.0011,   0.0025)        0.0028       ( 6.8749,  4.4999)      0.0380
 16    (   0.0008,  -0.0004)        0.0009       ( 6.8749,  4.5000)
----------------------------------------------------------------------------
Approximate Solution:        ( 6.8749,  4.5000)
Maximum Functional Value:    392.8125
Number gradient evaluations:  17
Number function evaluations: 16
----------------------------------------------------------------------------
----------------------------------------------------------------------------
Starting point ( 0.5000,  0.5000)

Iter   Gradient Vector G       magnitude of G   x[k]                 Step Length
  1    ( -0.6487,   1.3513)        1.4989       ( 0.5000,  0.5000)      0.2874
  2    (  0.4084,   0.1961)        0.4530       ( 0.3136,  0.8883)      1.7041
  3    ( -0.2993,   0.6235)        0.6917       ( 1.0095,  1.2224)      0.2001
  4    (  0.1097,   0.0526)        0.1216       ( 0.9496,  1.3472)      0.7345
  5    ( -0.0298,   0.0621)        0.0689       ( 1.0302,  1.3859)      0.1868
  6    (  0.0090,   0.0043)        0.0099       ( 1.0246,  1.3975)
----------------------------------------------------------------------------
Approximate Solution:        ( 1.0246,  1.3975)
Maximum Functional Value:      8.8277
Number gradient evaluations:   7
Number function evaluations:   6
----------------------------------------------------------------------------