Matrix Form of Input for Optimization Package Commands - Maple Programming Help



 This help page describes the Matrix form of input for commands in the Optimization package.  For general information on all the input forms accepted by the Optimization package commands, see the Optimization/InputForms help page.

Using Matrix Form

 • The solvers perform most efficiently when given problems in Matrix form, with the objective function and constraints specified as Vectors, Matrices, or procedures with Vector and Matrix parameters. Matrix form requires the least amount of preprocessing because it is most similar to the format used by the internal solvers. This leads to reduced storage needs and faster computation.
 • Consider an optimization problem in the form:
 minimize $f\left(x\right)$
 subject to
 $v\left(x\right)\le 0$ (nonlinear inequality constraints)
 $w\left(x\right)=0$ (nonlinear equality constraints)
 $A\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}.\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}x\le b$ (linear inequality constraints)
 $\mathrm{Aeq}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}.\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}x=\mathrm{beq}$ (linear equality constraints)
 $\mathrm{bl}\le x\le \mathrm{bu}$ (bounds)
 where $x$ is the vector of problem variables; $f\left(x\right)$ is the real-valued objective function; $v\left(x\right)$ and $w\left(x\right)$ are vector-valued functions of $x$; $b$, $\mathrm{beq}$, $\mathrm{bl}$, and $\mathrm{bu}$ are vectors; and $A$ and $\mathrm{Aeq}$ are matrices.  The dimension of $x$ is $n$. The relations involving matrices and vectors are element-wise. The specifications of each component are given in the following sections.

Objective Function

Linear Objective

 • The LPSolve command accepts a linear objective function $f\left(x\right)$ specified by a Vector parameter c having dimension $n$ and containing the coefficients of the objective function.

 • The QPSolve command accepts a quadratic objective function.  Suppose the function has the form $c\text{'}x+\frac{1}{2}x\text{'}Hx$ where c is a vector and H is the symmetric Hessian matrix.  Specify the function as [c, H] when the linear part exists, or H otherwise.  H must be an $n$ by $n$ symmetric Matrix and c must be a Vector of dimension $n$.  In the defined function, $c\text{'}$ and $x\text{'}$ refer to the vector transpose.
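 As a minimal sketch (the coefficient values here are hypothetical), the quadratic objective ${x}_{1}-{x}_{2}+{{x}_{1}}^{2}+{{x}_{2}}^{2}$, which has $c=⟨1,-1⟩$ and $H=2I$, could be supplied to QPSolve as:

```maple
with(Optimization):
# Hypothetical quadratic objective x1 - x2 + x1^2 + x2^2,
# i.e. c'x + (1/2)*x'Hx with c = <1, -1> and H = [[2, 0], [0, 2]].
# shape = symmetric and storage = rectangular avoid a conversion inside the solver.
c := Vector([1, -1], datatype = float):
H := Matrix([[2, 0], [0, 2]], datatype = float, shape = symmetric, storage = rectangular):
QPSolve([c, H]);
```

 If the linear part c were absent, H alone would be passed in place of the list [c, H].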

Nonlinear Objective

 • The NLPSolve command accepts a general nonlinear objective function $f\left(x\right)$ specified by a procedure p of type procedure[float](Vector).  The procedure takes one input Vector parameter of dimension n representing $x$ and returns the value of $f\left(x\right)$.  The value of n is passed separately to the NLPSolve command because it cannot be determined from the procedure p.
 • For best performance, provide the gradient of the objective function using the objectivegradient=objgrd option, where objgrd is of type procedure(Vector, Vector).  The procedure takes an input Vector parameter representing $x$, computes the values of the gradient of $f$ at $x$, and returns the results as the second Vector parameter.  The dimension of each Vector is n.
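 The two procedures can be sketched as follows for the hypothetical objective ${\left({x}_{1}-1\right)}^{2}+{\left({x}_{2}-2\right)}^{2}$ (the problem is invented for illustration):

```maple
with(Optimization):
# Hypothetical objective f(x) = (x1 - 1)^2 + (x2 - 2)^2 with n = 2.
p := proc(V)
    (V[1] - 1.0)^2 + (V[2] - 2.0)^2   # returns a float when V has datatype = float
end proc:
# Gradient of f, written into the second (output) Vector parameter.
objgrd := proc(V, W)
    W[1] := 2.0*(V[1] - 1.0);
    W[2] := 2.0*(V[2] - 2.0);
end proc:
# The dimension n = 2 is passed as the first argument.
NLPSolve(2, p, objectivegradient = objgrd);
```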

Linear Least-Squares Objective

 • The LSSolve command accepts a linear least-squares objective function. Suppose $f\left(x\right)$ is the linear least-squares objective function $\frac{1}{2}{‖c-Gx‖}^{2}$.  Specify $f\left(x\right)$ as a list [c, G] where c is a Vector of dimension $q$ and G is a $q$ by $n$ Matrix.
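 A minimal sketch with hypothetical data, for $q=3$ residuals in $n=2$ variables:

```maple
with(Optimization):
# Minimize (1/2)*||c - G.x||^2 for hypothetical c and G (q = 3, n = 2).
c := Vector([1, 2, 3], datatype = float):
G := Matrix([[1, 0], [0, 1], [1, 1]], datatype = float):
LSSolve([c, G]);
```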

Nonlinear Least-Squares Objective

 • The LSSolve command accepts a nonlinear least-squares objective function. Suppose $f\left(x\right)$ has the form $\left(\frac{1}{2}\right)\left({\mathrm{f1}\left(x\right)}^{2}+{\mathrm{f2}\left(x\right)}^{2}+\mathrm{...}+{\mathrm{fq}\left(x\right)}^{2}\right)$.  Specify $f\left(x\right)$ as a procedure p having type procedure(Vector, Vector). The procedure takes an input Vector parameter of dimension n representing $x$, computes the residuals $\mathrm{f1}$, $\mathrm{f2}$, ..., $\mathrm{fq}$ at $x$, and returns these residuals as the second Vector parameter. The values of n and q are passed separately to the LSSolve command because they cannot be determined from the procedure p.
 • Procedure p may have a third parameter needfi having type float.  In this case, the body of p should be constructed so that if needfi < 0.0, all the residuals are computed; otherwise, only the $i$th residual need be computed where $i=\mathrm{trunc}\left(\mathrm{needfi}\right)$. This leads to more efficient computation as residuals are computed only when necessary.  Such a procedure might have the form:

 p := proc(x, y, needfi)
     if needfi < 0.0 or trunc(needfi) = 1 then
         y[1] := ...;
     end if;
     if needfi < 0.0 or trunc(needfi) = 2 then
         y[2] := ...;
     end if;
     ...
 end proc;

 • For best performance, provide the Jacobian matrix of the objective function using the objectivejacobian=objjac option, where objjac has type procedure(Vector, Matrix).  This procedure takes an input Vector parameter representing $x$, computes the values of the Jacobian of $f$ at $x$, and returns these values as the Matrix parameter of dimension q by n.
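 For instance, for the hypothetical residuals ${f1}={x}_{1}-1$ and ${f2}={x}_{1}{x}_{2}$ (so $q=2$ and $n=2$), the Jacobian procedure could be sketched as:

```maple
# Jacobian of the hypothetical residuals f1 = x1 - 1 and f2 = x1*x2.
# Row i holds the partial derivatives of fi with respect to x1 and x2;
# pass this procedure to LSSolve via the objectivejacobian = objjac option.
objjac := proc(V, M)
    M[1, 1] := 1.0;   M[1, 2] := 0.0;    # df1/dx1, df1/dx2
    M[2, 1] := V[2];  M[2, 2] := V[1];   # df2/dx1, df2/dx2
end proc:
```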

Constraints

Linear Constraints

 • Linear constraints are accepted by all the Optimization solvers that take Matrix-form input. They can take one of the following forms.
 $\left[\right]$ -- no linear constraints
 $\left[A,b\right]$ -- inequality constraints only
 $\left[A,b,\mathrm{Aeq},\mathrm{beq}\right]$ -- inequality and equality constraints
 $\left[\mathrm{NoUserValue},\mathrm{NoUserValue},\mathrm{Aeq},\mathrm{beq}\right]$ -- equality constraints only
 • A and b represent the components of the linear inequality constraints $A\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}.\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}x\le b$, while Aeq and beq represent the components of the linear equality constraints $\mathrm{Aeq}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}.\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}x=\mathrm{beq}$.
 • b and beq can have type Vector or numeric. When either is given as a single numeric value, it is interpreted as a Vector in which each element is equal to that value.
 • A and Aeq must each be a Matrix with column dimension equal to $n$.  The row dimension of A must be equal to that of b when b is a Vector, and the row dimension of Aeq must be equal to that of beq when beq is a Vector.
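 For example, the hypothetical system ${x}_{1}+2{x}_{2}\le 4$, ${x}_{1}-{x}_{2}=1$ (with $n=2$) would be built as:

```maple
# Hypothetical constraints: x1 + 2*x2 <= 4 (inequality) and x1 - x2 = 1 (equality).
A   := Matrix([[1, 2]], datatype = float):
b   := Vector([4], datatype = float):
Aeq := Matrix([[1, -1]], datatype = float):
beq := Vector([1], datatype = float):
lc  := [A, b, Aeq, beq]:   # pass where the calling sequence expects the linear constraints
```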

Nonlinear Constraints

 • Nonlinear constraints are accepted by the NLPSolve and LSSolve commands. Consider nonlinear inequality constraints $v\left(x\right)\le 0$ and nonlinear equality constraints $w\left(x\right)=0$.  Let $y\left(x\right)$ be the vector consisting of $v\left(x\right)$ followed by $w\left(x\right)$.  Then $y\left(x\right)$ is represented by a procedure pcons of type procedure(Vector, Vector), where the first parameter is the current point $x$ and the second is an output parameter containing the values of $y$ at the point $x$.  Because the Optimization routine cannot determine the number of inequality and equality constraints, this information must be specified in the calling sequence.
 • Procedure pcons may have a third parameter needc, which is an input Vector of floating-point values with dimension equal to the total number of constraints.  In this case, the body of pcons should be constructed so that it checks the value of needc[i]; if this value is greater than 0.0, the left-hand side of the $i$th constraint is computed.  This leads to more efficient computation because constraint values are computed only when necessary.  Such a procedure might have the form:

 pcons := proc(x, y, needc)
     if needc[1] > 0.0 then
         y[1] := ...;
     end if;
     if needc[2] > 0.0 then
         y[2] := ...;
     end if;
     ...
 end proc;

 • For best performance, provide the Jacobian matrix of $y\left(x\right)$ using the constraintjacobian=conjac option, where conjac has type procedure(Vector, Matrix).  This procedure takes an input Vector parameter representing $x$, computes the values of the Jacobian of $y$ at $x$, and returns these values as the Matrix parameter.  Like procedure pcons described above, procedure conjac can take a third parameter needc.  Again, needc is a floating-point input Vector with $0<{\mathrm{needc}}_{i}$ indicating that the $i$th row of the Jacobian Matrix should be computed.

Bounds

 • Bounds on the problem variables are accepted by all the Optimization solvers that take Matrix-form input.  They are provided as a list $\left[\mathrm{bl},\mathrm{bu}\right]$ containing lower bounds followed by upper bounds.  If the list is empty, the bounds are assumed to be $-\mathrm{\infty }$ and $\mathrm{\infty }$ respectively.
 • The bounds bl and bu can be specified as Vectors of dimension $n$ or as numeric values.  If a numeric value is provided, it is interpreted as a Vector of dimension $n$ in which each element is equal to that value.
 • If there are upper bounds but no lower bounds, bl must be set to -Float(infinity).  If there are lower bounds but no upper bounds, bu must be set to Float(infinity).
 • The assume=nonnegative option can be used to specify the constraint that all variables must be non-negative. For more information, see the Optimization/Options help page.
 • Linear programs using the interior point method require finite lower bounds to be specified for each variable.
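 As a sketch, for a hypothetical two-variable problem with upper bounds of 10 and no lower bounds:

```maple
# No lower bounds, so bl is -Float(infinity); upper bounds of 10 on both variables.
bl := -Float(infinity):
bu := Vector([10, 10], datatype = float):
bd := [bl, bu]:   # pass where the calling sequence expects the bounds
```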

Initial Values

 • Initial values are accepted by all the Optimization solvers that take Matrix-form input and are specified using the option initialpoint=p where p is a Vector of dimension $n$.  For more information about the initialpoint option, see the Optimization/Options help page.
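 A sketch with a hypothetical objective and starting point:

```maple
with(Optimization):
# Hypothetical objective with n = 2; start the search from the point (0.5, 1.5).
p := proc(V)
    (V[1] - 1.0)^2 + (V[2] + 2.0)^2
end proc:
NLPSolve(2, p, initialpoint = Vector([0.5, 1.5], datatype = float));
```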

Solution

 • The solution is a list $\left[\mathrm{objval},\mathrm{solpt}\right]$ where objval is a floating-point number representing the final minimum (or maximum) value and solpt is a Vector representing a point (the computed extremum).

 • For best performance, the components described previously must satisfy the following additional conditions.
 All Matrix and Vector arguments must be constructed with datatype=float. Otherwise, the solvers must perform conversions.
 A symmetric Matrix argument, such as the Hessian matrix for the QPSolve command, must be created with shape=symmetric and storage=rectangular. Otherwise, the solver must perform this conversion.
 When the interior point method for linear programs is used, the constraint matrices should be specified with the option storage=sparse. Otherwise, any constraint matrices with non-sparse storage will be converted to sparse storage.
 When possible, create input procedures so that they can be evaluated by the evalhf command. Otherwise, the solvers cannot take advantage of the more efficient evalhf mode of computation.
 Procedures returning a scalar result must return a floating-point value in all cases, not simply an expression that evaluates to a float.
 • The following additional condition must be met. If a procedure returns a Vector or Matrix result using an output parameter, only floating-point values can be assigned to the parameter.  Otherwise, an error is issued.

Examples

 > $\mathrm{with}\left(\mathrm{Optimization}\right):$

Express a linear program in Matrix form and solve it using the Optimization[LPSolve] command.

 > $c≔\mathrm{Vector}\left(\left[-1,-1\right],\mathrm{datatype}=\mathrm{float}\right):$
 > $A≔\mathrm{Matrix}\left(\left[\left[-3,1\right],\left[5,1\right]\right],\mathrm{datatype}=\mathrm{float}\right):$
 > $b≔\mathrm{Vector}\left(\left[0.5,2\right],\mathrm{datatype}=\mathrm{float}\right):$
 > $\mathrm{LPSolve}\left(c,\left[A,b\right],\left[0.0,\mathrm{∞}\right]\right)$
 $\left[{-}{1.25000000000000}{,}\left[\begin{array}{c}{0.187500000000000}\\ {1.06250000000000}\end{array}\right]\right]$ (1)

Express a quadratic program in Matrix form and solve it using the Optimization[QPSolve] command.

 > $c≔\mathrm{Vector}\left(\left[2,5\right],\mathrm{datatype}=\mathrm{float}\right):$
 > $H≔\mathrm{Matrix}\left(\left[\left[6,3\right],\left[3,4\right]\right],\mathrm{datatype}=\mathrm{float}\right):$
 > $A≔\mathrm{Matrix}\left(\left[\left[-1,1\right]\right],\mathrm{datatype}=\mathrm{float}\right):$
 > $b≔\mathrm{Vector}\left(\left[-2\right],\mathrm{datatype}=\mathrm{float}\right):$
 > $\mathrm{QPSolve}\left(\left[c,H\right],\left[A,b\right]\right)$
 $\left[{-}{3.53333333333333}{,}\left[\begin{array}{c}{0.466666666666667}\\ {-}{1.60000000000000}\end{array}\right]\right]$ (2)

Consider the nonlinear objective function ${w}^{3}{\left(v-w\right)}^{2}+{\left(w-x-1\right)}^{2}+{\left(x-y-2\right)}^{2}+{\left(y-z-3\right)}^{2}$ with a nonlinear constraint ${w}^{2}+{x}^{2}+y+z\le 5$ and a linear constraint $z=v$. Let V represent the Vector of variables $⟨v,x,w,y,z⟩$ and write the objective function as a procedure having V as an input parameter and returning a floating-point value.

 > p := proc (V)
       V[3]^3*(V[1]-V[3])^2+(V[3]-V[2]-1)^2+
       (V[2]-V[4]-2)^2+(V[4]-V[5]-3)^2
   end proc:

Write the nonlinear constraint ${w}^{2}+{x}^{2}+y+z\le 5$ as a procedure having V as an input parameter and W as a one-dimensional Vector output parameter.  The needc parameter is optional.

 > nlc := proc(V, W, needc)
       if needc[1] > 0.0 then
           W[1] := V[3]^2+V[2]^2+V[4]+V[5]-5
       end if:
   end proc:

Construct the list representing linear constraints. In this case, there is a single equality constraint $z-v=0$.

 > $\mathrm{Aeq}≔\mathrm{Matrix}\left(\left[\left[-1,0,0,0,1\right]\right],\mathrm{datatype}=\mathrm{float}\right):$
 > $\mathrm{beq}≔\mathrm{Vector}\left(\left[0\right],\mathrm{datatype}=\mathrm{float}\right):$
 > $\mathrm{lc}≔\left[\mathrm{NoUserValue},\mathrm{NoUserValue},\mathrm{Aeq},\mathrm{beq}\right]:$

For better performance, provide procedures representing the gradient of the objective function and the Jacobian of the constraints.  The objgra procedure takes V as an input parameter and returns the gradient values at V in the output Vector parameter W.

 > objgra := proc (V, W)
       W[1] := 2*V[3]^3*(V[1]-V[3]);
       W[2] := -2*V[3]+4*V[2]-2-2*V[4];
       W[3] := 3*V[3]^2*(V[1]-V[3])^2-2*V[3]^3*(V[1]-V[3])+
               2*V[3]-2*V[2]-2;
       W[4] := -2*V[2]+4*V[4]-2-2*V[5];
       W[5] := -2*V[4]+2*V[5]+6
   end proc:

The conjac procedure takes V as an input parameter and returns the Jacobian in the output Matrix parameter M.  The needc parameter is optional.

 > conjac := proc(V, M, needc)
       if needc[1] > 0.0 then
           M[1,1] := 0;
           M[1,2] := 2*V[2];
           M[1,3] := 2*V[3];
           M[1,4] := 1;
           M[1,5] := 1;
       end if
   end proc:

Solve the nonlinear program with the Optimization[NLPSolve] command by providing the number of variables, 5, as the first argument.  Use the $\mathrm{assume}=\mathrm{nonnegative}$ option to specify that all variables must take non-negative values.

 > $\mathrm{NLPSolve}\left(5,p,1,\mathrm{nlc},\mathrm{lc},\mathrm{objectivegradient}=\mathrm{objgra},\mathrm{constraintjacobian}=\mathrm{conjac},\mathrm{assume}=\mathrm{nonnegative}\right)$
 $\left[{9.26002553760621794}{,}\left[\begin{array}{c}{0.}\\ {1.60516113308118}\\ {0.907741706680391}\\ {1.30258056909003}\\ {4.118049995}{×}{{10}}^{{-310}}\end{array}\right]\right]$ (3)