Optimization in Maple
Go to: Maple Portal | Maple Portal for Engineers
The Optimization package in Maple provides a collection of commands for numerically solving optimization problems: finding the minimum or maximum of an objective function, possibly subject to constraints. The package enables you to solve linear programs (LPs), quadratic programs (QPs), nonlinear programs (NLPs), and both linear and nonlinear least-squares problems. It accepts a wide range of input formats, including algebraic form, Matrix form, and Maple programming (or operator) notation. More information on the input forms accepted by the Optimization package can be found in the InputForms help page.
This document uses several examples to illustrate how Maple can solve linear programs, quadratic programs, nonlinear programs, and least-squares problems.
The commands in the Optimization package can be accessed by loading the package into Maple using the with command.
| (1) |
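A minimal sketch of the loading step (the worksheet's actual output is not reproduced here):

```
# Load the package; its exported commands (LPSolve, NLPSolve, QPSolve,
# LSSolve, Maximize, Minimize, ...) then become available at the top level.
with(Optimization);
```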
Solving Linear Programs
As mentioned earlier, the Optimization package can handle a wide variety of input formats. This section will show how linear programs can be solved for algebraic and Matrix input notations.
Algebraic Input Notation
In this example, we show how to enter and solve a linear program in algebraic notation.
Problem Description
Let's assume we want to:
Maximize the function
Subject to:
Solution
First, define the objective function:
| (2) |
Then define a set containing all the constraint equations:
| (3) |
For a two-dimensional linear program, you can easily graph the feasible region, shown in blue below. The black lines indicate the constraints, and the red lines show the contours of the objective function.
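A feasible region like the one described can be drawn with the plots:-inequal command. This is only a sketch: the constraints below are hypothetical stand-ins, since the worksheet's actual constraint set is not shown.

```
# Hypothetical two-variable constraints, for illustration only
cons := { x + 2*y <= 13, 3*x - y >= 2, x >= 0, y >= 0 }:
# Shade the feasible region in blue
plots:-inequal( cons, x = 0 .. 10, y = 0 .. 10,
                optionsfeasible = [color = "LightBlue"] );
```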
Using the Maximize command, find the solution that maximizes the objective function subject to the constraint equations. The first entry returned, 10.5, is the objective value at the optimal solution; the second entry is the point (4, 6.5) at which it is attained. The point (4, 6.5) lies at the upper-right vertex of the feasible region.
| (4) |
Similarly, the Minimize command can be used to find the minimum solution.
| (5) |
Alternatively, you can use the LPSolve command to find the minimum and maximum values of the objective function. LPSolve is more flexible than the Maximize and Minimize commands in that it can also accept problems in Matrix format. By default, LPSolve minimizes. To maximize, include the option maximize. For more information on the LPSolve command, see LPSolve.
| (6) |
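The calling pattern for the three commands discussed above can be sketched as follows. The objective and constraints here are hypothetical placeholders (the worksheet's actual problem data is not shown), and with(Optimization) is assumed to have been run:

```
# Hypothetical objective function and constraint set
f := -x + 2*y:
cons := { x + 2*y <= 13, 3*x - y >= 2, x >= 0, y >= 0 }:

Maximize(f, cons);           # returns [objective value, [x = ..., y = ...]]
Minimize(f, cons);           # same form, minimizing instead
LPSolve(f, cons, maximize);  # LPSolve minimizes by default; 'maximize' flips it
```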
Finally, you can also solve the problem using the interactive Optimization Assistant. The Assistant, seen in the following figure, allows you to enter, edit, and solve (or re-solve) the problem by pointing and clicking, rather than typing commands. It is accessible through the Interactive command or through the Tools menu (Tools > Assistants > Optimization). See Optimization Interactive for information on using the Optimization Assistant.
In the assistant, choose Minimize or Maximize under Options, and then press Solve.
| (7) |
| (8) |
Solving the Problem with the Optimization Assistant
Note: The Optimization Assistant only accepts input in algebraic form.
Matrix Notation
Problem Description
The Diet Problem is a linear programming problem studied by George Dantzig in the 1940s. The Diet Problem tries to find the cheapest diet that meets our daily nutritional needs, given the nutritional values and unit costs of a set of foods.
Let's define the vector c of costs of the 1000 available foods, where c[i] is the cost per unit weight of food i.
| (9) |
Similarly, we define A as a Matrix with dimensions 600 x 1000, where A[i, j] is the amount of nutrient i in a unit amount of food j.
| (10) |
Lastly, we define b to be a vector specifying our daily requirements of the 600 nutrients, where b[i] is the daily requirement of nutrient i.
| (11) |
Letting the vector x be the daily amounts of the foods we choose to buy, the problem can be stated: minimize c^T x subject to A x >= b and x >= 0.
Solution
We solve the linear program as follows:
LPSolve(c, [A, b]) solves min c^T x such that A x <= b. To tell Maple we want A x >= b, we need to pass [-A, -b]. See LPSolve (Matrix Form) for more information on using the Matrix input form for LPSolve.
| (12) |
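A small-scale sketch of the Matrix-form call follows. The data here is a hypothetical stand-in (3 nutrients and 5 foods instead of the 600 x 1000 data set, which is not shown in this text):

```
with(Optimization):
# Hypothetical stand-in for the diet data: 3 nutrients, 5 foods
c := Vector([2.0, 3.5, 1.0, 4.0, 2.5]):        # unit cost of each food
A := Matrix([[10, 0, 3, 40, 2],
             [ 5, 2, 8,  0, 1],
             [ 1, 7, 2,  3, 9]]):              # nutrient content per unit of food
b := Vector([50, 30, 20]):                     # daily nutrient requirements

# LPSolve(c, [A, b]) imposes A.x <= b, so negate both to get A.x >= b
sol := LPSolve(c, [-A, -b], assume = nonnegative):

# Which foods actually appear in the diet?
positive := select( i -> sol[2][i] > 1e-6, [seq(i, i = 1 .. 5)] );
```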
Which elements of the solution are positive? Of the 1000 available foods, it looks like our diet will consist of a select few, which we hope are palatable.
| (13) |
Solving Non-Linear Programs
Problem Description
Find the point on the circle formed by the intersection of the unit sphere with the plane that is closest to the point (1, 2, 3).
Solution
This problem can be easily conceptualized by plotting the unit sphere and the intersecting plane.
The objective function is the squared distance between a point (x, y, z) and the point (1, 2, 3).
| (14) |
The point (x, y, z) is constrained to lie on both the unit sphere and the given plane. Thus, the constraints are:
| (15) |
We minimize the objective function subject to the constraints using the NLPSolve command.
| (16) |
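The setup can be sketched as follows. The plane equation below is a hypothetical placeholder, since the worksheet's actual plane is not shown in this text:

```
with(Optimization):
# Objective: squared distance from (x, y, z) to (1, 2, 3)
f := (x - 1)^2 + (y - 2)^2 + (z - 3)^2:
# Constraints: the unit sphere and a hypothetical intersecting plane
cons := { x^2 + y^2 + z^2 = 1, x + y + z = 1/2 }:
# Minimize the squared distance subject to both equality constraints
NLPSolve(f, cons);
```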
Thus, the minimum squared distance is 10.29 (a distance of about 3.21), and the closest point is (-0.51, 0.17, 0.84).
Solving Quadratic Programs
Problem Description
The Markowitz model is an optimization model for balancing the return and risk of a portfolio. The decision variables are the amounts invested in each asset. The objective is to minimize the variance of the portfolio's total return, subject to the following constraints: (1) the expected growth of the portfolio is at least some target level, and (2) we don't invest more capital than we have.
Solution
Let:
The objective function is , which can be shown to be equal to .
If, for example, , we would try to:
Minimize the function
Subject to:
Suppose we have the following data.
Since this is a quadratic function of X, we can use quadratic programming to solve the problem.
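A sketch of the QPSolve call for a problem of this shape follows. The expected returns, covariance entries, and 10% target below are hypothetical stand-ins, since the worksheet's actual data is not shown in this text:

```
with(Optimization):
# Hypothetical data for 3 assets: portfolio variance as a quadratic in x1..x3
risk := 0.04*x1^2 + 0.09*x2^2 + 0.03*x3^2
        + 2*(0.01*x1*x2 + 0.02*x2*x3):
cons := { 0.12*x1 - 0.20*x2 + 0.09*x3 >= 0.10,   # expected return >= 10% target
          x1 + x2 + x3 <= 1 }:                   # can't invest more than we have
# Minimize variance; 'assume = nonnegative' forbids short positions
QPSolve(risk, cons, assume = nonnegative);
```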
| (17) |
| (18) |
| (19) |
Thus, the minimum variance portfolio that earns an expected return of at least 10% is . Asset 2 gets nothing because its expected return is -20% and its covariance with the other assets is not sufficiently negative for it to bring any diversification benefits.
For more information about the QPSolve command, see QPSolve.
Solving Least-Square Problems
Problem Description
Neural networks are machine-learning programs that can learn to approximate functions using a set of sample function values called training data. To generate an output, a neural network applies a smooth function (typically sigmoidal or hyperbolic tangent) to a linear weighting of the input data. To train the network, one assigns these weights so as to minimize the sum of squared differences between the desired and generated outputs over all data points in the training set. This is an optimization problem that can often be solved by least-squares techniques.
Solution
Let's assume we have a sample training data set with five points. The first element of each point is the input; the second is the desired output.
| (20) |
Suppose that for a given input value x, the network outputs , where and are the weights we wish to set. The residuals are the differences between these outputs and the desired outputs given in the training set:
| (21) |
Here, we extract elements of the list using the selection operation. Then, seq is used to create a sequence.
The objective function is the sum of squares of the residuals:
| (22) |
Now we solve the optimization problem with the LSSolve command. Note that we only have to pass Maple the list of residuals.
| (23) |
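The whole fitting step can be sketched as follows. The training data, the tanh form of the network output, and the weight names w1 and w2 are hypothetical placeholders, since the worksheet's actual values are not shown in this text:

```
with(Optimization):
# Hypothetical training data: [input, desired output] pairs
data := [[0.0, 0.1], [0.5, 0.4], [1.0, 0.7], [1.5, 0.85], [2.0, 0.95]]:
# Network output: tanh applied to a linear weighting of the input;
# w1 and w2 are the weights to fit
out := x -> tanh(w1*x + w2):
# Residuals via selection (p[1], p[2]) and seq, one per data point
res := [ seq( out(p[1]) - p[2], p in data ) ]:
# LSSolve minimizes (1/2) * (sum of squared residuals);
# returns [objective value, [w1 = ..., w2 = ...]]
LSSolve(res);
```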
The weights needed to minimize the sum of squared differences between the desired and generated outputs for and are and .