Linear Optimization with the Optimization Package: Matrix Form
This worksheet introduces the matrix form of the linear optimization solver LPSolve in the Optimization package. For an introduction to the algebraic form, please refer to the Optimization example worksheet.
In this worksheet, we will be minimizing a linear function (referred to as the objective function) subject to a number of linear constraints. Some maximization examples are included as well; in many of the other examples, the maximize option can be added to the command to find the maximum instead of the minimum.
Linear Optimization Primer
A linear optimization problem, often called a linear program or simply an LP, is an optimization problem of the form
minimize        c' . x              (objective function)
subject to:     A . x ≤ b           (linear inequality constraints)
                Aeq . x = beq       (linear equality constraints)
                bl ≤ x ≤ bu         (bounds)
where x is the vector of problem variables; c, b, beq, bl, and bu are vectors; and A and Aeq are matrices. The relations involving matrices and vectors hold element-wise, and c' denotes the transpose of the vector c.
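As a preview, a problem containing all of these pieces can be passed directly to the LPSolve command in matrix form. The following is only a sketch of the calling sequence with made-up data; the worked example later in this worksheet goes through the details.
with(Optimization):
c := <1, 2>:                       # objective coefficients
A := Matrix([[1, 1], [0, 1]]):     # inequality constraints x1 + x2 <= 4 and x2 <= 3
b := <4, 3>:
Aeq := Matrix([[1, -1]]):          # one equality constraint x1 - x2 = 0
beq := <0>:
bl := <0, 0>:  bu := <10, 10>:     # bounds bl <= x <= bu
LPSolve(c, [A, b, Aeq, beq], [bl, bu]);   # expected result: [0., <0., 0.>]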
The standard form for linear programs has no linear inequality constraints (A = 0, b = 0) and uses the bound x ≥ 0 (that is, bl_i = 0 and bu_i = ∞ for all i). The standard form LP is given by
minimize        c' . x
subject to:     A . x = b
                x ≥ 0               (nonnegativity bounds)
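For instance, a small standard form problem with made-up data can be solved directly; the algebraic form of LPSolve is used here for brevity, with the assume = nonnegative option supplying the bound x ≥ 0.
with(Optimization):
LPSolve(2*x1 + 3*x2 + x3, {x1 + x2 + x3 = 10, x1 - x2 = 2}, assume = nonnegative);
This returns the optimal value 12 at the point x1 = 2, x2 = 0, x3 = 8.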
Another common form for linear optimization problems is the symmetric form. The symmetric form is a maximization problem that has no linear equality constraints (Aeq = 0, beq = 0) and uses the bound x ≥ 0:
maximize        c' . x
subject to:     A . x ≤ b
                x ≥ 0
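A symmetric form problem can be solved in matrix form by combining the maximize and assume = nonnegative options. The data in this sketch are invented for illustration.
with(Optimization):
c := <3, 5>:
A := Matrix([[1, 2], [3, 1]]):     # x + 2 y <= 14 and 3 x + y <= 15
b := <14, 15>:
LPSolve(c, [A, b], assume = nonnegative, maximize);   # expected result: [36.6, <3.2, 5.4>]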
While the symmetric form has only linear inequality constraints, it should be noted that the algorithms used to solve the problem generally convert the inequality constraints to equality constraints by adding slack variables. For example, the inequality constraints
2 x + 3 y ≤ 4
3 x − 5 y ≤ 9
where
x, y ≥ 0
are converted into equality constraints by adding two new nonnegative variables u and v:
2 x + 3 y + u = 4
3 x − 5 y + v = 9
u, v, x, y ≥ 0
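Both formulations describe the same problem, which can be checked with LPSolve in algebraic form; the objective function −x − y below is chosen only for illustration.
with(Optimization):
LPSolve(-x - y, [2*x + 3*y <= 4, 3*x - 5*y <= 9], assume = nonnegative);
LPSolve(-x - y, [2*x + 3*y + u = 4, 3*x - 5*y + v = 9], assume = nonnegative);
Both calls return the optimal value −2 at the point (2, 0); the second also reports the slack values u = 0 and v = 3.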
For each inequality constraint in the optimization problem, a slack variable is introduced to convert that constraint to an equality constraint. Consequently, when solving a symmetric form optimization problem, the number of variables actually used in solving the problem is double the number in the problem statement.
All of the constraints of the problem, including the bounds, define a feasible region for the problem. The feasible region for a linear optimization problem is always a convex polytope, which may or may not be bounded. Since the objective function is also linear, it follows that if a linear optimization problem has an optimal solution, then either the optimal solution is unique or there are infinitely many optimal solutions.
If there is a unique optimal solution, it is a corner point of the feasible region; in this case, the line defined by setting the objective function equal to the optimal value intersects the feasible region at a single point. When there is more than one optimal solution, this line must intersect the feasible region in at least two points. However, since the feasible region is a convex polytope, the intersection must then be an edge or a face of the polytope, and therefore contains infinitely many points.
A Simple Example
Consider the two variable linear optimization problem written algebraically:
min         −2 x − 3 y
subject to  −7 x + 4 y ≤ 2.2
            5 x + y ≤ 5
            x, y ≥ 0.
The constraint inequalities, including the non-negativity inequalities, are given by
restart:
cnsts := [-7*x + 4*y <= 2.2, 5*x + y <= 5, 0 <= x, 0 <= y];
cnsts ≔ [−7 x + 4 y ≤ 2.2, 5 x + y ≤ 5, 0 ≤ x, 0 ≤ y]
The set of points that satisfy all of these constraint inequalities defines the feasible region of the problem. For this example, the feasible region is the shaded region (including its borders) in the plot below.
with(plots):
feasibleRegion := inequal(cnsts, x = -0.5..2, y = -0.5..2, optionsexcluded = [color = white]):
display(feasibleRegion);
The objective function of the problem is given by
obj := -2*x - 3*y;
obj ≔ −2 x − 3 y
Some contours of the objective function are shown in the following plot along with the location of a point in the feasible region that minimizes the objective function.
contours := contourplot(obj, x = -0.5..2, y = -0.5..2):
optimalPoint := pointplot([[0.659259, 1.703704]], symbolsize = 22, symbol = solidcircle, color = "Niagara Red"):   # optimal point
display(feasibleRegion, contours, optimalPoint);
The example problem can be written in matrix form using the following vectors and matrix:
c := <-2, -3>;
A := <<-7, 5> | <4, 1>>;
b := <2.2, 5>;
c ≔ [ −2 ]
    [ −3 ]
A ≔ [ −7  4 ]
    [  5  1 ]
b ≔ [ 2.2 ]
    [  5  ]
The Vector c corresponds to the objective function obj; the Matrix A is the inequality constraint matrix, holding the coefficients of the inequality constraint functions; and the Vector b holds the constants on the right-hand sides of the constraint inequalities.
To solve this problem, we use the LPSolve command from the Optimization package. It outputs the optimal value (the minimal value of the objective function) followed by a point in the feasible region that achieves this value. The command accepts the LP in either matrix form or algebraic form and is called as follows.
with(Optimization):
solAlg := LPSolve(obj, cnsts);
solMatrix := LPSolve(c, [A, b]);
solAlg ≔ [−6.42962962962963, [x = 0.659259259259259, y = 1.70370370370370]]
solMatrix ≔ [−6.42962962962963, Vector([0.6592592592592592, 1.7037037037037037])]
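In matrix form, the solution point is returned as a Vector, so the pieces of the result can be extracted by indexing. As a quick check using the objects defined above, the objective value can be recomputed from the returned point:
optValue := solMatrix[1]:   # the optimal objective value
xopt := solMatrix[2]:       # the optimal point as a Vector
c . xopt;                   # recomputes the objective value, -6.4296...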
Methods for Solving Linear Optimization Problems
Linear optimization solvers are typically based on one of three techniques: the simplex method, an active set method, or an interior point method. While these methods arrive at a solution in different ways, they are all iterative: they compute a sequence of successively better (or no worse) solutions until either an exact optimal solution or a "good enough" solution is found, where "good enough" means within acceptable floating-point tolerances.
The first technique we consider is the simplex method. Invented in 1947 by George Dantzig, the simplex method uses the corner points of the feasible region as its iterates. At each iteration of the algorithm, a neighboring corner point is chosen so that the objective value either improves or stays the same. When the algorithm determines that no further step can improve the objective value, it terminates with the current corner point as an optimal solution.
Consider the two-dimensional feasible region shown shaded in the following plot.
restart:
with(plots):
cnsts := [-7*x + 4*y <= 2.2, 2*x + y <= 5, x + 2*y >= 0.75, x - y <= 2, 0.1*x + y <= 2, 0 <= x, 0 <= y];
feasibleRegion := inequal(cnsts, x = -0.5..3, y = -0.5..2.5, optionsexcluded = [color = white]):
display(feasibleRegion);
cnsts ≔ [−7 x + 4 y ≤ 2.2, 2 x + y ≤ 5, 0.75 ≤ x + 2 y, x − y ≤ 2, 0.1 x + y ≤ 2, 0 ≤ x, 0 ≤ y]
Suppose the objective function for a minimization problem is given by
obj := -y;
obj ≔ −y
Then the optimal solution should be the point in the feasible region with the greatest y value, since minimizing −y is equivalent to maximizing y. In a simplex-based method, an initial feasible point (a corner point) is constructed first. Suppose we have the initial point
initialPoint := [0.75, 0.0];
initialPoint ≔ [0.75, 0.]
A simplex-based method looks at all corner points neighboring the current point and decides which one gives the best improvement in the objective function. If the objective function value at the current point is at least as good as its value at every neighboring corner point, then the current point is optimal and the algorithm terminates. Otherwise, the current point is updated to a neighbor that improves the objective value the most (the improvement might be zero in degenerate cases).
In this example, three new points need to be considered before an optimal solution is found. The sequence of points is shown in the following plot.
p1 := [0.75, 0.0]:  p2 := [0.0, 0.375]:  p3 := [0.0, 0.55]:
p4 := [0.7837837838, 1.921621622]:  p5 := [1.578947368, 1.842105263]:
points := pointplot([p1, p2, p3, p4], symbolsize = 25, color = "Niagara Red", symbol = solidcircle):
lines := plot([p1, p2, p3, p4], style = line, thickness = 3):
labels := textplot([[1, 0.2, "starting point"], [1.1, 1.7, "optimal point"], [0.5, 0.375, "second point"], [0.42, 0.575, "third point"]], color = white):
display(points, feasibleRegion, labels, lines);
In this example there was a unique solution because the objective function was optimized at a single point of the feasible region. This need not be the case: if the objective function were instead parallel to one of the edges of the feasible region, there would be infinitely many solutions. For example, consider the same feasible region with the objective function
obj2 := 0.1*x + y;
obj2 ≔ 0.1 x + y
In the following plot, the green line shows the contour of the objective function at its maximum value. The two red points are corner points that are optimal solutions to the maximization problem, and every point on the green line between these two points is also optimal. A simplex-based method will return one of the two corner points as the solution to the problem.
obj2line := plot(-0.1*x + 2, x = -0.5..3, thickness = 3, color = "Niagara Green"):
opt2points := pointplot([p4, p5], symbolsize = 25, symbol = solidcircle, color = "Niagara Red"):
display(feasibleRegion, obj2line, opt2points);
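This can be confirmed with LPSolve, reusing obj2 and the cnsts list defined above; the command returns the optimal value 2 together with just one of the optimal corner points.
with(Optimization):
LPSolve(obj2, cnsts, maximize);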
In higher dimensions, the contour of the objective function at its maximum value might intersect the feasible region at a single point, along an edge (giving infinitely many optimal solutions), or along a face of the polytope (again, infinitely many optimal solutions). In all cases, a simplex-based method finds a corner point as an optimal solution if one exists.
In an active set method, each iteration considers points that are solutions of a subset of the constraints called the active set. The active set at a given iteration is the current estimate of which constraints are active (satisfied with equality) at an optimal solution. In each iteration a new point x' = x + Δx is computed from the current point x by finding a step direction Δx. The step direction is determined so that the change in objective value from x to x' is optimized while the constraints in the active set remain active; thus, the new point x' satisfies the constraints in the current active set with equality.
The active set itself is also updated from iteration to iteration, until the algorithm finds the active set of an optimal solution.
When the number of constraints is greater than the number of variables in a linear optimization problem, the active set method considers active sets of size equal to the number of variables.
In an interior point method, each iteration considers points that are strictly inside the feasible region, with the exception of the initial point, which may be outside it. In each iteration a new point x' = x + α Δx is computed from the current point x by finding a step direction Δx. The step direction is chosen to optimize the change in objective value from x to x', and the step length α is chosen to ensure that the new point x' lies strictly inside the feasible region.
For example, using the same feasible region as before and the second objective function obj2 = 0.1 x + y (maximized), the iterates of an interior point method might look something like those in the following plot. Here, the initial point is infeasible.
restart:
with(plots):
cnsts := [-7*x + 4*y <= 2.2, 2*x + y <= 5, x + 2*y >= 0.75, x - y <= 2, 0.1*x + y <= 2, 0 <= x, 0 <= y]:
feasibleRegion := inequal(cnsts, x = -0.5..3, y = -0.5..2.5, optionsexcluded = [color = white]):
q1 := [2.6, 0.3]:  q2 := [1.75, 0.1]:  q3 := [0.8, 0.5]:  q4 := [0.7, 1.1]:
q5 := [0.85, 1.5]:  q6 := [1.0, 1.7]:  q7 := [1.07, 1.8]:  q8 := [1.15, 1.85]:
obj2line := plot(-0.1*x + 2, x = -0.5..3, color = "Niagara Green", thickness = 3):
points := pointplot([q1, q2, q3, q4, q5, q6, q7, q8], color = "Niagara Red", symbolsize = 25, symbol = solidcircle):
lines := plot([q1, q2, q3, q4, q5, q6, q7, q8], color = "Niagara Red", style = line, thickness = 4):
labels := textplot([[2.7, 0.2, "starting"], [2.7, 0.1, "point"], [1.175, 2.175, "optimal"], [1.2, 2.05, "point"], [1.7, 0.3, "second point"], [1.2, 0.575, "third point"]], color = white):
display(feasibleRegion, obj2line, labels, lines, points);
In this example, the interior point method does not find a corner point as its solution to the problem. Instead, it finds a point near the middle of the segment of optimal solutions; this behavior is common with interior point methods. In a small example like this one, an interior point method will most likely require more iterations than a simplex-based method, since the number of corner points that could be visited is very small. For larger problems in higher dimensions, however, the number of iterations for an interior point method can be much smaller than for a simplex-based method.
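Depending on your Maple version, LPSolve accepts a method option for choosing between these techniques. The following sketch restates the feasible region above in matrix form (with the constraint x + 2 y ≥ 0.75 rewritten as −x − 2 y ≤ −0.75) and maximizes 0.1 x + y with each method; the method values shown are assumptions that may vary by version.
with(Optimization):
c2 := <0.1, 1>:
A2 := Matrix([[-7, 4], [2, 1], [-1, -2], [1, -1], [0.1, 1]]):
b2 := <2.2, 5, -0.75, 2, 2>:
LPSolve(c2, [A2, b2], assume = nonnegative, maximize, method = activeset);
LPSolve(c2, [A2, b2], assume = nonnegative, maximize, method = interiorpoint);
Both calls should report the optimal value 2, though the interior point method may return a different optimal point than the active-set method, since the optimum is not unique.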