codegen - Maple Programming Help

codegen[GRADIENT]

 compute the Gradient of a Maple procedure

Parameters

 F - Maple procedure computing a function of x1, x2, ..., xn

 X - (optional) list of symbols (formal parameters of F) w.r.t. which the derivatives are taken

Description

 • The first argument F is a Maple procedure that computes a function of x1,x2,...,xn.  The GRADIENT command outputs a new procedure G which, when executed at given values of x1,x2,...,xn, returns a vector of the partial derivatives of F w.r.t. x1,...,xn at those values.  This is known as automatic differentiation.  It often leads to a more efficient computation than symbolic differentiation, that is, what you would obtain from using linalg[grad]. For example, given

 F := proc(x,y) local t; t := exp(-x); y*t+t end proc;

 The output of G := GRADIENT(F); is the Maple procedure

 G := proc(x, y) local df, t; t := exp(-x); df := array(1 .. 1); df[1] := y + 1; return -df[1]*exp(-x), t end proc

 When G is called with arguments $\left(1.0,1.0\right)$, the output is the vector $\left(-0.7357588824,0.3678794412\right)$. G can subsequently be optimized with optimize(G), so that the exponential is computed only once, to obtain

 proc(x, y) local df, t; t := exp(-x); df := array(1 .. 1); df[1] := y + 1; return -df[1]*t, t end proc

 One can obtain derivatives of any function that can be represented by a computation built from arithmetic operations and mathematical function calls, including procedures containing loops and making subroutine calls.
 The remaining arguments to GRADIENT are optional; they are described below. There are a number of limitations on the Maple procedure F that the GRADIENT command can handle.  These are also discussed below.
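As an illustrative sketch of differentiating through a loop (this procedure is hypothetical, not one of this page's examples), consider a function computed iteratively:

```maple
# Hypothetical example: F computes x^4 by repeated multiplication.
# GRADIENT differentiates through the loop, so the resulting
# procedure computes the derivative 4*x^3 without any symbolic
# expansion of the loop body.
F := proc(x) local i, p;
    p := 1;
    for i to 4 do p := p*x end do;
    p
end proc:
G := GRADIENT(F):
```

Calling G(2.0) should then evaluate the single partial derivative 4*x^3 at x = 2.0.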
 • By default, GRADIENT computes the partial derivatives of F w.r.t. all formal parameters present in F. The optional argument X, a list of symbols, may be used to specify which formal parameters to differentiate with respect to. For example, the call GRADIENT(F,[x]); computes the partial derivative of F with respect to the formal parameter x only. The result is

 proc(x, y) local df, t; t := exp(-x); df := array(1 .. 1); df[1] := y + 1; return -df[1]*exp(-x) end proc

 The procedure F may not refer to its parameters using args[i]. For example, you may not define F as follows:

 F := proc() local t; t := exp(-args[1]); args[2]*t+t end proc;

 Nor is it possible to differentiate w.r.t. an array of parameters. For example, you cannot presently define F as

 F := proc(x::array(1..2)) local t; t := exp(-x[1]); x[2]*t+t end proc;

 and compute the gradient w.r.t. the array x.
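A common workaround for this limitation, sketched here with a hypothetical rewrite of the earlier example, is to expose the array entries as separate scalar formal parameters:

```maple
# Workaround sketch: instead of an array parameter x with entries
# x[1] and x[2], pass two scalar parameters x1 and x2.
F := proc(x1, x2) local t; t := exp(-x1); x2*t + t end proc:
G := GRADIENT(F):   # differentiates w.r.t. both x1 and x2
```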
 • Two algorithms are supported, the so-called forward and reverse modes. By default, GRADIENT tries to use the reverse mode, since it usually leads to more efficient code.  If it is unable to use the reverse mode, the forward mode is used.  The user may specify which algorithm is to be used by giving the optional argument mode=forward or mode=reverse.
 • The advantage of the reverse mode is that the cost of computing the gradient is usually lower than with the forward mode, especially when the number of partial derivatives n is large.  It is also cheaper than symbolic differentiation (Maple's diff command) and than divided differences.  Specifically, if the cost of computing F is m arithmetic operations, then the cost of computing the gradient of F using the forward and reverse modes is O(m n) and O(m+n) respectively.  However, the best result is usually obtained by pre-optimizing the input procedure F and post-optimizing the output procedure of GRADIENT. Also, before applying GRADIENT it is best to split up long products in F.  This can be done with the codegen[split] command.
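To make the operation counts concrete with illustrative numbers (not taken from this page): for a procedure costing $m = 1000$ operations in $n = 100$ variables,

```latex
\underbrace{O(m n)}_{\text{forward mode}} \;\approx\; 1000 \cdot 100 = 100000
\qquad\text{versus}\qquad
\underbrace{O(m+n)}_{\text{reverse mode}} \;\approx\; 1000 + 100 = 1100 .
```

This is why the reverse mode is preferred by default when the number of inputs is large.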
 • The vector of partial derivatives is, by default, returned as a sequence. The optional arguments result_type=list, result_type=array, and result_type=seq specify that the vector of derivatives returned by G is to be a Maple list, array, or sequence, respectively. For example, the call GRADIENT(F,result_type=array); yields

 proc(x, y) local df, grd, t; t := exp(-x); df := array(1 .. 1); df[1] := y + 1; grd := array(1 .. 2); grd[1] := - df[1]*exp(-x); grd[2] := t; return grd end proc

 • The optional argument function_value=true causes GRADIENT to compute the function value $F\left(\mathrm{x1},...,\mathrm{xn}\right)$ at the same time as the gradient. It is returned as the first value in the result.  For example, the call GRADIENT(F,function_value=true); yields

 proc(x, y) local df, t; t := exp(-x); df := array(1 .. 1); df[1] := y + 1; return y*t + t, - df[1]*exp(-x), t end proc

 • See also codegen[HESSIAN] and codegen[JACOBIAN] for routines for computing Hessians and Jacobians using automatic differentiation of functions represented by Maple procedures. Also, the derivative operator D in Maple can be used to compute a derivative of a Maple procedure in one variable.
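For the one-variable case mentioned above, a minimal sketch of the D operator on a procedure (the procedure f here is hypothetical):

```maple
# D maps a univariate procedure to a procedure computing its
# derivative, without going through codegen.
f := proc(x) sin(x)^2 end proc:
fp := D(f):   # fp computes 2*sin(x)*cos(x)
```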
 • The command with(codegen,GRADIENT) allows the use of the abbreviated form of this command.

Examples

 > $\mathrm{with}\left(\mathrm{codegen}\right):$
 > F := proc(x,y) local t; t := x*y; x+t-y*t; end proc;
 ${F}{:=}{\mathbf{proc}}\left({x}{,}{y}\right)\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathbf{local}}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{t}{;}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{t}{:=}{x}{*}{y}{;}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{x}{+}{t}{-}{y}{*}{t}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathbf{end proc}}$ (1)
 > $F\left(x,y\right)$
 ${-}{x}{}{{y}}^{{2}}{+}{x}{}{y}{+}{x}$ (2)
 > $G≔\mathrm{GRADIENT}\left(F\right)$
 ${G}{:=}{\mathbf{proc}}\left({x}{,}{y}\right)\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathbf{local}}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathrm{df}}{,}{t}{;}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{t}{:=}{x}{*}{y}{;}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathrm{df}}{:=}{\mathrm{array}}{}\left({1}{..}{1}\right){;}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathrm{df}}{[}{1}{]}{:=}{−}{y}{+}{1}{;}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathbf{return}}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{y}{*}{\mathrm{df}}{[}{1}{]}{+}{1}{,}{x}{*}{\mathrm{df}}{[}{1}{]}{-}{t}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathbf{end proc}}$ (3)
 > $G\left(x,y\right)$
 ${y}{}\left({-}{y}{+}{1}\right){+}{1}{,}{x}{}\left({-}{y}{+}{1}\right){-}{x}{}{y}$ (4)
 > $G≔\mathrm{GRADIENT}\left(F,\mathrm{mode}=\mathrm{forward}\right)$
 ${G}{:=}{\mathbf{proc}}\left({x}{,}{y}\right)\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathbf{local}}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathrm{dt}}{,}{t}{;}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathrm{dt}}{:=}{\mathrm{array}}{}\left({1}{..}{2}\right){;}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathrm{dt}}{[}{1}{]}{:=}{y}{;}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathrm{dt}}{[}{2}{]}{:=}{x}{;}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{t}{:=}{x}{*}{y}{;}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathbf{return}}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathrm{dt}}{[}{1}{]}{*}\left({−}{y}{+}{1}\right){+}{1}{,}{\mathrm{dt}}{[}{2}{]}{*}\left({−}{y}{+}{1}\right){-}{t}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathbf{end proc}}$ (5)
 > $G\left(x,y\right)$
 ${y}{}\left({-}{y}{+}{1}\right){+}{1}{,}{x}{}\left({-}{y}{+}{1}\right){-}{x}{}{y}$ (6)
 > $G≔\mathrm{GRADIENT}\left(F,\mathrm{function_value}=\mathrm{true}\right)$
 ${G}{:=}{\mathbf{proc}}\left({x}{,}{y}\right)\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathbf{local}}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathrm{df}}{,}{t}{;}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{t}{:=}{x}{*}{y}{;}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathrm{df}}{:=}{\mathrm{array}}{}\left({1}{..}{1}\right){;}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathrm{df}}{[}{1}{]}{:=}{−}{y}{+}{1}{;}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathbf{return}}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{−}{t}{*}{y}{+}{t}{+}{x}{,}{y}{*}{\mathrm{df}}{[}{1}{]}{+}{1}{,}{x}{*}{\mathrm{df}}{[}{1}{]}{-}{t}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathbf{end proc}}$ (7)
 > $G\left(x,y\right)$
 ${-}{x}{}{{y}}^{{2}}{+}{x}{}{y}{+}{x}{,}{y}{}\left({-}{y}{+}{1}\right){+}{1}{,}{x}{}\left({-}{y}{+}{1}\right){-}{x}{}{y}$ (8)
 > $G≔\mathrm{GRADIENT}\left(F,\mathrm{result_type}=\mathrm{array}\right)$
 ${G}{:=}{\mathbf{proc}}\left({x}{,}{y}\right)\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathbf{local}}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathrm{df}}{,}{\mathrm{grd}}{,}{t}{;}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{t}{:=}{x}{*}{y}{;}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathrm{df}}{:=}{\mathrm{array}}{}\left({1}{..}{1}\right){;}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathrm{df}}{[}{1}{]}{:=}{−}{y}{+}{1}{;}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathrm{grd}}{:=}{\mathrm{array}}{}\left({1}{..}{2}\right){;}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathrm{grd}}{[}{1}{]}{:=}{y}{*}{\mathrm{df}}{[}{1}{]}{+}{1}{;}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathrm{grd}}{[}{2}{]}{:=}{x}{*}{\mathrm{df}}{[}{1}{]}{-}{t}{;}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathbf{return}}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathrm{grd}}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathbf{end proc}}$ (9)
 > $G≔\mathrm{GRADIENT}\left(F,\mathrm{result_type}=\mathrm{list}\right)$
 ${G}{:=}{\mathbf{proc}}\left({x}{,}{y}\right)\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathbf{local}}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathrm{df}}{,}{t}{;}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{t}{:=}{x}{*}{y}{;}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathrm{df}}{:=}{\mathrm{array}}{}\left({1}{..}{1}\right){;}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathrm{df}}{[}{1}{]}{:=}{−}{y}{+}{1}{;}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathbf{return}}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}\left[{y}{*}{\mathrm{df}}{[}{1}{]}{+}{1}{,}{x}{*}{\mathrm{df}}{[}{1}{]}{-}{t}\right]\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathbf{end proc}}$ (10)
 > $H≔\mathrm{GRADIENT}\left(G,\mathrm{result_type}=\mathrm{list}\right)$
 ${H}{:=}{\mathbf{proc}}\left({x}{,}{y}\right)\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathbf{local}}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathrm{df}}{,}{\mathrm{df1}}{,}{\mathrm{dfr0}}{,}{t}{;}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{t}{:=}{x}{*}{y}{;}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathrm{df1}}{:=}{−}{y}{+}{1}{;}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathrm{df}}{:=}{\mathrm{array}}{}\left({1}{..}{2}\right){;}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathrm{dfr0}}{:=}{\mathrm{array}}{}\left({1}{..}{2}\right){;}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathrm{df}}{[}{2}{]}{:=}{y}{;}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathrm{dfr0}}{[}{2}{]}{:=}{x}{;}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathrm{dfr0}}{[}{1}{]}{:=}{−}{1}{;}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathbf{return}}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}\left[\left[{0}{,}{\mathrm{df1}}{-}{\mathrm{df}}{[}{2}{]}\right]{,}\left[{y}{*}{\mathrm{dfr0}}{[}{1}{]}{+}{\mathrm{df1}}{,}{x}{*}{\mathrm{dfr0}}{[}{1}{]}{-}{\mathrm{dfr0}}{[}{2}{]}\right]\right]\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathbf{end proc}}$ (11)
 > $\mathrm{optimize}\left(H\right)$
 ${\mathbf{proc}}\left({x}{,}{y}\right)\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathbf{local}}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathrm{t1}}{;}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathrm{t1}}{:=}{−}{2}{*}{y}{+}{1}{;}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}\left[\left[{0}{,}{\mathrm{t1}}\right]{,}\left[{\mathrm{t1}}{,}{−}{2}{*}{x}\right]\right]\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathbf{end proc}}$ (12)

This next example illustrates pre- and post-optimization. The gradient is computed with respect to phi and omega only. Since the torus procedure returns three values, the gradient is really the Jacobian matrix of partial derivatives.

 > torus := proc(phi,omega,R,r) local x,y,z;
       x := cos(phi)*(R+r*cos(omega));
       y := sin(phi)*(R+r*cos(omega));
       z := r*sin(omega);
       [x,y,z]
   end proc:
 > $\mathrm{torus}≔\mathrm{optimize}\left(\mathrm{torus}\right)$
 ${\mathrm{torus}}{:=}{\mathbf{proc}}\left({\mathrm{phi}}{,}{\mathrm{omega}}{,}{R}{,}{r}\right)\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathbf{local}}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathrm{t1}}{,}{\mathrm{t2}}{,}{\mathrm{t4}}{,}{\mathrm{t5}}{,}{\mathrm{t6}}{;}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathrm{t1}}{:=}{\mathrm{cos}}{}\left({\mathrm{phi}}\right){;}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathrm{t2}}{:=}{\mathrm{cos}}{}\left({\mathrm{omega}}\right){;}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathrm{t4}}{:=}{r}{*}{\mathrm{t2}}{+}{R}{;}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathrm{t5}}{:=}{\mathrm{sin}}{}\left({\mathrm{phi}}\right){;}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathrm{t6}}{:=}{\mathrm{sin}}{}\left({\mathrm{omega}}\right){;}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}\left[{\mathrm{t1}}{*}{\mathrm{t4}}{,}{\mathrm{t5}}{*}{\mathrm{t4}}{,}{r}{*}{\mathrm{t6}}\right]\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathbf{end proc}}$ (13)
 > $G≔\mathrm{GRADIENT}\left(\mathrm{torus},\left[\mathrm{φ},\mathrm{ω}\right],\mathrm{result_type}=\mathrm{list}\right)$
 ${G}{:=}{\mathbf{proc}}\left({\mathrm{phi}}{,}{\mathrm{omega}}{,}{R}{,}{r}\right)\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathbf{local}}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathrm{df}}{,}{\mathrm{dfr0}}{,}{\mathrm{dfr1}}{,}{\mathrm{t1}}{,}{\mathrm{t2}}{,}{\mathrm{t4}}{,}{\mathrm{t5}}{,}{\mathrm{t6}}{;}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathrm{t1}}{:=}{\mathrm{cos}}{}\left({\mathrm{phi}}\right){;}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathrm{t2}}{:=}{\mathrm{cos}}{}\left({\mathrm{omega}}\right){;}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathrm{t4}}{:=}{r}{*}{\mathrm{t2}}{+}{R}{;}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathrm{t5}}{:=}{\mathrm{sin}}{}\left({\mathrm{phi}}\right){;}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathrm{t6}}{:=}{\mathrm{sin}}{}\left({\mathrm{omega}}\right){;}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathrm{df}}{:=}{\mathrm{array}}{}\left({1}{..}{5}\right){;}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathrm{dfr0}}{:=}{\mathrm{array}}{}\left({1}{..}{5}\right){;}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathrm{dfr1}}{:=}{\mathrm{array}}{}\left({1}{..}{5}\right){;}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathrm{df}}{[}{3}{]}{:=}{\mathrm{t1}}{;}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathrm{df}}{[}{2}{]}{:=}{\mathrm{df}}{[}{3}{]}{*}{r}{;}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathrm{df}}{[}{1}{]}{:=}{\mathrm{t4}}{;}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathrm{dfr0}}{[}{4}{]}{:=}{\mathrm{t4}}{;}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathrm{dfr0}}{[}{3}{]}{:=}{\mathrm{t5}}{;}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathrm{dfr0}}{[}{2}{]}{:=}{\mathrm{dfr0}}{[}{3}{]}{*}{r}{;}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathrm{dfr1}}{[}{5}{]}{:=}{r}{;}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathbf{return}}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}\left[\left[{−}{\mathrm{df}}{[}{1}{]}{*}{\mathrm{sin}}{}\left({\mathrm{phi}}\right){,}{−}{\mathrm{df}}{[}{2}{]}{*}{\mathrm{sin}}{}\left({\mathrm{omega}}\right)\right]{,}\left[{\mathrm{dfr0}}{[}{4}{]}{*}{\mathrm{cos}}{}\left({\mathrm{phi}}
\right){,}{−}{\mathrm{dfr0}}{[}{2}{]}{*}{\mathrm{sin}}{}\left({\mathrm{omega}}\right)\right]{,}\left[{0}{,}{\mathrm{dfr1}}{[}{5}{]}{*}{\mathrm{cos}}{}\left({\mathrm{omega}}\right)\right]\right]\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathbf{end proc}}$ (14)
 > $\mathrm{optimize}\left(G\right)$
 ${\mathbf{proc}}\left({\mathrm{phi}}{,}{\mathrm{omega}}{,}{R}{,}{r}\right)\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathbf{local}}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathrm{df}}{,}{\mathrm{dfr0}}{,}{\mathrm{t1}}{,}{\mathrm{t2}}{,}{\mathrm{t3}}{,}{\mathrm{t4}}{,}{\mathrm{t5}}{,}{\mathrm{t6}}{;}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathrm{t1}}{:=}{\mathrm{cos}}{}\left({\mathrm{phi}}\right){;}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathrm{t2}}{:=}{\mathrm{cos}}{}\left({\mathrm{omega}}\right){;}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathrm{t3}}{:=}{r}{*}{\mathrm{t2}}{;}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathrm{t4}}{:=}{\mathrm{t3}}{+}{R}{;}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathrm{t5}}{:=}{\mathrm{sin}}{}\left({\mathrm{phi}}\right){;}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathrm{t6}}{:=}{\mathrm{sin}}{}\left({\mathrm{omega}}\right){;}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathrm{df}}{:=}{\mathrm{array}}{}\left({1}{..}{5}\right){;}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathrm{dfr0}}{:=}{\mathrm{array}}{}\left({1}{..}{5}\right){;}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathrm{df}}{[}{2}{]}{:=}{\mathrm{t1}}{*}{r}{;}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathrm{dfr0}}{[}{2}{]}{:=}{\mathrm{t5}}{*}{r}{;}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}\left[\left[{−}{\mathrm{t5}}{*}{\mathrm{t4}}{,}{−}{\mathrm{df}}{[}{2}{]}{*}{\mathrm{t6}}\right]{,}\left[{\mathrm{t1}}{*}{\mathrm{t4}}{,}{−}{\mathrm{dfr0}}{[}{2}{]}{*}{\mathrm{t6}}\right]{,}\left[{0}{,}{\mathrm{t3}}\right]\right]\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathbf{end proc}}$ (15)

We redo the same example but this time using the makeproc command to first build the torus procedure, returning an array instead of a list.

 > $A≔\mathrm{array}\left(\left[\mathrm{cos}\left(\mathrm{φ}\right)\left(R+r\mathrm{cos}\left(\mathrm{ω}\right)\right),\mathrm{sin}\left(\mathrm{φ}\right)\left(R+r\mathrm{cos}\left(\mathrm{ω}\right)\right),r\mathrm{sin}\left(\mathrm{ω}\right)\right]\right):$
 > torus := optimize( makeproc(A,[phi,omega,R,r]) );
 ${\mathrm{torus}}{:=}{\mathbf{proc}}\left({\mathrm{phi}}{,}{\mathrm{omega}}{,}{R}{,}{r}\right)\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathbf{local}}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{A}{,}{\mathrm{t1}}{,}{\mathrm{t2}}{,}{\mathrm{t4}}{,}{\mathrm{t5}}{,}{\mathrm{t6}}{;}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{A}{:=}{\mathrm{array}}{}\left({1}{..}{3}\right){;}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathrm{t1}}{:=}{\mathrm{cos}}{}\left({\mathrm{phi}}\right){;}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathrm{t2}}{:=}{\mathrm{cos}}{}\left({\mathrm{omega}}\right){;}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathrm{t4}}{:=}{\mathrm{t2}}{*}{r}{+}{R}{;}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{A}{[}{1}{]}{:=}{\mathrm{t1}}{*}{\mathrm{t4}}{;}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathrm{t5}}{:=}{\mathrm{sin}}{}\left({\mathrm{phi}}\right){;}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{A}{[}{2}{]}{:=}{\mathrm{t5}}{*}{\mathrm{t4}}{;}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathrm{t6}}{:=}{\mathrm{sin}}{}\left({\mathrm{omega}}\right){;}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{A}{[}{3}{]}{:=}{r}{*}{\mathrm{t6}}{;}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{A}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathbf{end proc}}$ (16)
 > $G≔\mathrm{optimize}\left(\mathrm{GRADIENT}\left(\mathrm{torus},\left[\mathrm{φ},\mathrm{ω}\right],\mathrm{result_type}=\mathrm{array}\right)\right)$
 ${G}{:=}{\mathbf{proc}}\left({\mathrm{phi}}{,}{\mathrm{omega}}{,}{R}{,}{r}\right)\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathbf{local}}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathrm{df}}{,}{\mathrm{dfr0}}{,}{\mathrm{grd}}{,}{\mathrm{t1}}{,}{\mathrm{t2}}{,}{\mathrm{t4}}{,}{\mathrm{t5}}{,}{\mathrm{t6}}{;}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathrm{t1}}{:=}{\mathrm{cos}}{}\left({\mathrm{phi}}\right){;}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathrm{t2}}{:=}{\mathrm{cos}}{}\left({\mathrm{omega}}\right){;}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathrm{t4}}{:=}{r}{*}{\mathrm{t2}}{+}{R}{;}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathrm{t5}}{:=}{\mathrm{sin}}{}\left({\mathrm{phi}}\right){;}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathrm{t6}}{:=}{\mathrm{sin}}{}\left({\mathrm{omega}}\right){;}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathrm{df}}{:=}{\mathrm{array}}{}\left({1}{..}{8}\right){;}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathrm{dfr0}}{:=}{\mathrm{array}}{}\left({1}{..}{8}\right){;}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathrm{df}}{[}{2}{]}{:=}{r}{*}{\mathrm{t1}}{;}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathrm{dfr0}}{[}{2}{]}{:=}{\mathrm{t5}}{*}{r}{;}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathrm{grd}}{:=}{\mathrm{array}}{}\left({1}{..}{3}{,}{1}{..}{2}\right){;}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathrm{grd}}{[}{1}{,}{1}{]}{:=}{−}{\mathrm{t5}}{*}{\mathrm{t4}}{;}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathrm{grd}}{[}{1}{,}{2}{]}{:=}{−}{\mathrm{df}}{[}{2}{]}{*}{\mathrm{t6}}{;}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathrm{grd}}{[}{2}{,}{1}{]}{:=}{\mathrm{t1}}{*}{\mathrm{t4}}{;}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathrm{grd}}{[}{2}{,}{2}{]}{:=}{−}{\mathrm{dfr0}}{[}{2}{]}{*}{\mathrm{t6}}{;}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathrm{grd}}{[}{3}{,}{1}{]}{:=}{0}{;}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathrm{grd}}{[}{3}{,}{2}{]}{:=}{r}{*}{\mathrm{t2}}{;}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathrm{grd}}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathbf{end proc}}$ (17)

This example shows that local procedures are handled.

 > f := proc(x) local s, t;
       s := proc(x) local e; e := exp(x); (e-1/e)/2 end proc;
       t := s(x^2);
       1-x*t+x^2*t;
   end proc:
 > $\mathrm{GRADIENT}\left(f\right)$
 ${\mathbf{proc}}\left({x}\right)\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathbf{local}}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathrm{dfr0}}{,}{\mathrm{ds}}{,}{\mathrm{lf1}}{,}{s}{,}{t}{;}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathrm{ds}}{:=}{\mathbf{proc}}\left({x}\right)\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathbf{local}}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathrm{df}}{,}{e}{;}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{e}{:=}{\mathrm{exp}}{}\left({x}\right){;}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathrm{df}}{:=}{\mathrm{array}}{}\left({1}{..}{1}\right){;}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathrm{df}}{[}{1}{]}{:=}{1}{/}{2}{+}{1}{/}\left({2}{*}{e}{^}{2}\right){;}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathbf{return}}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}\left[{\mathrm{df}}{[}{1}{]}{*}{\mathrm{exp}}{}\left({x}\right)\right]\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathbf{end proc}}{;}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{s}{:=}{\mathbf{proc}}\left({x}\right)\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathbf{local}}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{e}{;}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{e}{:=}{\mathrm{exp}}{}\left({x}\right){;}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{1}{/}{2}{*}{e}{-}{1}{/}\left({2}{*}{e}\right)\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathbf{end proc}}{;}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{t}{:=}{s}{}\left({x}{^}{2}\right){;}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathrm{dfr0}}{:=}{\mathrm{array}}{}\left({1}{..}{1}\right){;}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathrm{lf1}}{:=}{\mathrm{ds}}{}\left({x}{^}{2}\right){;}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathrm{dfr0}}{[}{1}{]}{:=}{x}{^}{2}{-}{x}{;}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathbf{return}}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{2}{*}{x}{*}{\mathrm{dfr0}}{[}{1}{]}{*}{\mathrm{lf1}}{[}{1}{]}{+}{2}{*}{t}{*}{x}{-}{t}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathbf{end proc}}$ (18)

This final example illustrates the need for breaking up large products.

 > F := proc(u,v,w,x,y,z) u*v*w*x*y*z end proc;
 ${F}{:=}{\mathbf{proc}}\left({u}{,}{v}{,}{w}{,}{x}{,}{y}{,}{z}\right)\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{u}{*}{v}{*}{w}{*}{x}{*}{y}{*}{z}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathbf{end proc}}$ (19)
 > $G≔\mathrm{GRADIENT}\left(F\right)$
 ${G}{:=}{\mathbf{proc}}\left({u}{,}{v}{,}{w}{,}{x}{,}{y}{,}{z}\right)\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathbf{return}}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{v}{*}{w}{*}{x}{*}{y}{*}{z}{,}{u}{*}{w}{*}{x}{*}{y}{*}{z}{,}{u}{*}{v}{*}{x}{*}{y}{*}{z}{,}{u}{*}{v}{*}{w}{*}{y}{*}{z}{,}{u}{*}{v}{*}{w}{*}{x}{*}{z}{,}{u}{*}{v}{*}{w}{*}{x}{*}{y}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathbf{end proc}}$ (20)
 > $\mathrm{cost}\left(G\right)$
 ${24}{}{\mathrm{multiplications}}$ (21)
 > $F≔\mathrm{split}\left(F\right)$
 ${F}{:=}{\mathbf{proc}}\left({u}{,}{v}{,}{w}{,}{x}{,}{y}{,}{z}\right)\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathbf{local}}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathrm{s0}}{,}{\mathrm{s1}}{,}{\mathrm{s2}}{,}{\mathrm{s3}}{;}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathrm{s0}}{:=}{u}{*}{v}{;}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathrm{s1}}{:=}{w}{*}{x}{;}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathrm{s2}}{:=}{y}{*}{z}{;}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathrm{s3}}{:=}{\mathrm{s0}}{*}{\mathrm{s1}}{;}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathbf{return}}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathrm{s3}}{*}{\mathrm{s2}}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathbf{end proc}}$ (22)
 > $G≔\mathrm{GRADIENT}\left(F\right)$
 ${G}{:=}{\mathbf{proc}}\left({u}{,}{v}{,}{w}{,}{x}{,}{y}{,}{z}\right)\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathbf{local}}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathrm{df}}{,}{\mathrm{s0}}{,}{\mathrm{s1}}{,}{\mathrm{s2}}{,}{\mathrm{s3}}{;}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathrm{s0}}{:=}{u}{*}{v}{;}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathrm{s1}}{:=}{w}{*}{x}{;}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathrm{s2}}{:=}{y}{*}{z}{;}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathrm{s3}}{:=}{\mathrm{s0}}{*}{\mathrm{s1}}{;}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathrm{df}}{:=}{\mathrm{array}}{}\left({1}{..}{4}\right){;}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathrm{df}}{[}{4}{]}{:=}{\mathrm{s2}}{;}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathrm{df}}{[}{3}{]}{:=}{\mathrm{s3}}{;}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathrm{df}}{[}{2}{]}{:=}{\mathrm{df}}{[}{4}{]}{*}{\mathrm{s0}}{;}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathrm{df}}{[}{1}{]}{:=}{\mathrm{df}}{[}{4}{]}{*}{\mathrm{s1}}{;}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathbf{return}}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{v}{*}{\mathrm{df}}{[}{1}{]}{,}{u}{*}{\mathrm{df}}{[}{1}{]}{,}{\mathrm{df}}{[}{2}{]}{*}{x}{,}{\mathrm{df}}{[}{2}{]}{*}{w}{,}{\mathrm{df}}{[}{3}{]}{*}{z}{,}{\mathrm{df}}{[}{3}{]}{*}{y}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}{\mathbf{end proc}}$ (23)
 > $\mathrm{cost}\left(G\right)$
 ${8}{}{\mathrm{storage}}{+}{9}{}{\mathrm{assignments}}{+}{12}{}{\mathrm{multiplications}}{+}{12}{}{\mathrm{subscripts}}$ (24)