Applications of the Global Optimization Toolbox - Maple Programming Help


Applications of the Global Optimization Toolbox

Introduction to Optimization

The goal of optimization is to find the best solution to a problem from a set of feasible solutions. The solutions are compared using a real-valued objective function of one or more problem variables. The feasible set is determined by the constraints, a system of inequalities or equations in the problem variables. Mathematically, the goal is to find a point that maximizes or minimizes the objective function while satisfying the constraints. Such a point is called an extremum. Optimization problems are specified as follows.

Minimize (or maximize) $f(x)$

subject to $g_i(x) \le 0$, $i = 1..p$,

and $h_i(x) = 0$, $i = 1..m$,

where $x \in \mathbb{R}^n$.

Optimization models are classified by the structure of $f(x)$, $g_i(x)$, and $h_i(x)$. If all the functions are linear in $x$, the model is a linear program. If $f(x)$ is quadratic in $x$, with $g_i(x)$ and $h_i(x)$ linear in $x$, the model is a quadratic program. If $f(x)$ is a sum of squared functions, the model is a least-squares problem. For any other structure, the model is called a nonlinear program (NLP). The Optimization package that ships with Maple provides a set of algorithms for solving each of these classes.

Traditionally, optimization research has focused on local search. The goal of local search is to find a local extremum of $f(x)$ in the feasible region. From an initial point, local search algorithms iteratively search for a point in a neighborhood of the current point that improves the objective function value, while maintaining or approaching feasibility, using derivative information about $f(x)$, $g_i(x)$, and $h_i(x)$ at the current point. Ideally, the search terminates at a feasible local extremum. Algorithms differ in how they measure closeness to feasibility and how they perform the search.

Local search is effective when $f(x)$ has a unique local extremum within the feasible region, because the local solution found by the search is then the global solution to the problem. This is guaranteed when $f(x)$ and all of the $g_i(x)$ are convex functions and all of the $h_i(x)$ are affine functions.

Local search can perform poorly when $f(x)$ has many local extrema within the feasible region. In this case, the local extremum found depends on the starting point and may not be the global solution. Multiple local minima can be present when any of $f(x)$, $g_i(x)$ are nonconvex functions or any of $h_i(x)$ are nonlinear functions. Nonconvexity often arises in real-world problems and is perhaps the greatest obstacle to solving optimization problems. Global optimization is one of the newest and most successful approaches to overcoming this obstacle.

The algorithms in the Optimization package are local search methods. Local search methods use first and second derivatives to find a local extremum, iteratively producing points that improve the objective function while maintaining or approaching feasibility.

The algorithms in the Global Optimization Toolbox are global search methods. Instead of using derivatives to find a local extremum, they systematically search the entire feasible region for a global extremum. Global search methods find a global extremum in optimization models that have many local extrema, and they do not rely on an initial point.

Although global search does not rely on starting points, it does require finite bounds on all decision variables, to bound the search space. Also, the computational effort for global search methods increases rapidly with the number of variables, more so than for local search methods. This often does not become noticeable until the number of variables reaches the thousands.

To see the advantages of global optimization over local optimization, consider the following example problems and compare the local nonlinear solver NLPSolve from the Optimization package with the GlobalSolve command from the Global Optimization Toolbox.

 > restart; interface(warnlevel=0): with( plots ): with( Optimization ): with( GlobalOptimization ):

Quartic Polynomial

Find the global minimum of this quartic polynomial over the interval [0, 5]. Note that the function is nonconvex and has two local minima.

 > poly := (x-1)*(x-2)*(x-3.5)*(x-4.8);
 ${\mathrm{poly}}{:=}\left({x}{-}{1}\right){}\left({x}{-}{2}\right){}\left({x}{-}{3.5}\right){}\left({x}{-}{4.8}\right)$ (2.1.1)

 > plot( poly, x=0..5 );

Depending on the starting point provided, the built-in local NLP solver returns either the global minimum at approximately $x=4.298$ or the suboptimal local minimum at approximately $x=1.407$. The NLP solver returns whichever local minimum its algorithm converges to.

 > NLPSolve( poly, initialpoint=[x=5] ); NLPSolve( poly, initialpoint=[x=3] ); NLPSolve( poly, initialpoint=[x=2] ); NLPSolve( poly, initialpoint=[x=0] );
 $\left[{-}{1.71396627012779357}{,}\left[{x}{=}{1.40679933068723}\right]\right]$
 $\left[{-}{3.03603863879697267}{,}\left[{x}{=}{4.29790996157305}\right]\right]$
 $\left[{-}{1.71396627012779401}{,}\left[{x}{=}{1.40679932978431}\right]\right]$
 $\left[{-}{1.71396627012779468}{,}\left[{x}{=}{1.40679933068591}\right]\right]$ (2.1.2)

The GlobalOptimization solver searches for the global minimum. Note that the calling sequence does not rely on a starting point but does require a finite range for $x$, which is set to 0..5 for this problem.

 > GlobalSolve( poly, x=0..5 );
 $\left[{-}{3.03603863821012698}{,}\left[{x}{=}{4.29790181062472}\right]\right]$ (2.1.3)
 >
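The same local-versus-global contrast can be sketched outside Maple. The following illustration uses SciPy (an assumption of this sketch, not part of the worksheet), with `differential_evolution` standing in for the Toolbox's global search and `minimize` playing the role of the local NLP solver:

```python
from scipy.optimize import minimize, differential_evolution

# The quartic from the worksheet, with two local minima on [0, 5].
def poly(x):
    return (x - 1) * (x - 2) * (x - 3.5) * (x - 4.8)

# Local search: which minimum is returned depends on the starting point.
local = minimize(lambda v: poly(v[0]), x0=[0.0], bounds=[(0.0, 5.0)])

# Global search over the same interval needs only the finite bounds.
best = differential_evolution(lambda v: poly(v[0]), bounds=[(0.0, 5.0)], seed=1)

print(local.x[0], local.fun)  # a local minimum, depending on x0
print(best.x[0], best.fun)    # the global minimum near x = 4.298
```

Differential evolution is not the algorithm used by the Toolbox; it is merely a readily available global method that, like the Toolbox, requires bounds rather than a starting point.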

Nonconvex Quadratic Function

Find the global minimum of a 2-D nonconvex quadratic function over a rectangle.

 > with( LinearAlgebra ):

Construct the nonconvex function using a non-positive-definite matrix.

 > Q := Matrix( <<1.9,-3.5>|<5,-1/3>> );
 ${Q}{:=}\left[\begin{array}{cc}{1.9}& {5}\\ {-}{3.5}& {-}\frac{{1}}{{3}}\end{array}\right]$ (2.2.1)

Check that the matrix is not positive definite.

 > IsDefinite( Q );
 ${\mathrm{false}}$ (2.2.2)

Construct the function ${x}^{t}Qx$.

 > NonconvexQuadratic := expand( Transpose( <x, y> ) . Q . <x, y> );
 ${\mathrm{NonconvexQuadratic}}{:=}{1.9}{}{{x}}^{{2}}{+}{1.5}{}{x}{}{y}{-}\frac{{1}}{{3}}{}{{y}}^{{2}}$ (2.2.3)

Plot the function and its contours. There are global minima at approximately [3.947, -10.000] and [-3.947, 10.000], and a saddle point at [0, 0].

 > p1:=plot3d( NonconvexQuadratic, x=-10..10, y=-10..10, axes=boxed, transparency=.7, style=patch): p2:=plot3d( -100, x=-10..10, y=-10..10, style=patchnogrid, color=NonconvexQuadratic ): plots[display]( p1, p2, orientation=[125,60] );

From most starting points, the Maple local solver finds the global minimum at [3.947, -10.000].

 > NLPSolve( NonconvexQuadratic, x=-10..10, y=-10..10, initialpoint=[x=.1, y=0]);
 $\left[{-}{62.9385964912280613}{,}\left[{x}{=}{3.94736842105263}{,}{y}{=}{-}{10.}\right]\right]$ (2.2.4)

If the starting point is chosen poorly, the local solver stops at the saddle point [0, 0]. The solver stops because it detects a zero gradient at its current point. Saddle points have a zero gradient but are not extrema.

 > NLPSolve( NonconvexQuadratic, x=-10..10, y=-10..10, initialpoint=[x=0, y=0]);
 $\left[{0.}{,}\left[{x}{=}{0.}{,}{y}{=}{0.}\right]\right]$ (2.2.5)

The global solver always finds one of the two global minima. Finite ranges for $x$ and $y$ must be specified.

 > GlobalSolve( NonconvexQuadratic, x=-10..10, y=-10..10 );
 $\left[{-}{62.9385964912280}{,}\left[{x}{=}{-}{3.94736842105263}{,}{y}{=}{10.}\right]\right]$ (2.2.6)
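The saddle-point behavior can be reproduced in SciPy (assumed here for illustration; `differential_evolution` again substitutes for the Toolbox's global method):

```python
from scipy.optimize import minimize, differential_evolution

# The nonconvex quadratic x^T Q x from the worksheet.
def f(v):
    x, y = v
    return 1.9 * x**2 + 1.5 * x * y - y**2 / 3

bounds = [(-10, 10), (-10, 10)]

# Started exactly at the saddle point, a gradient-based method stops
# immediately: the gradient is zero there, although [0, 0] is not a minimum.
stuck = minimize(f, x0=[0.0, 0.0], bounds=bounds)

# Global search finds one of the two global minima near (±3.947, ∓10).
best = differential_evolution(f, bounds=bounds, seed=1)
```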

Nonconvex Feasible Region

Find the global minimum of a 2-D convex function over a plane minus the unit disk, which is a nonconvex region.

 > obj := x^2 + y^2 - x + y;
 ${\mathrm{obj}}{:=}{{x}}^{{2}}{+}{{y}}^{{2}}{-}{x}{+}{y}$ (2.3.1)
 > constraint := x^2 + y^2 >= 1;
 ${\mathrm{constraint}}{:=}{1}{\le }{{x}}^{{2}}{+}{{y}}^{{2}}$ (2.3.2)

The global minimum is at approximately [.707, -.707], shown in the following plot in red at the boundary of the feasible region. Several contours of the objective function are shown.

Starting from the point [-2, 2], the Maple local solver finds the non-optimal local minimum at approximately [-.707, .707], shown in blue. The gradient of the objective function at [-.707, .707] is perpendicular to the boundary of the infeasible region, terminating the local search.

 > p1 := plottools[disk]([0,0],1,color=green): p2 := plots[contourplot](obj, x=-2..2, y=-2..2, coloring=[blue,red], contours=20): p3 := plots[pointplot]([[.707,-.707]],color=red, symbolsize=20, symbol=circle): p4 := plots[pointplot]([[-.707,.707],[-2,2] ],color=blue, symbolsize=20, symbol=circle): plots[display]( p1, p2, p3, p4 );

Local search from [-2, 2] finds a suboptimal local minimum.

 > NLPSolve( obj, {constraint}, x=-2..2, y=-2..2, initialpoint=[x=-2,y=2]);
 $\left[{2.41421356273966214}{,}\left[{x}{=}{-}{0.707106781262466}{,}{y}{=}{0.707106781262466}\right]\right]$ (2.3.3)

Global search finds the global minimum at [.707, -.707].

 > GlobalSolve( obj, {constraint}, x=-2..2, y=-2..2);
 $\left[{-}{0.414213562373094923}{,}\left[{x}{=}{0.707106781186557}{,}{y}{=}{-}{0.707106781186538}\right]\right]$ (2.3.4)
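A sketch of the same constrained problem with SciPy (assumed available; `NonlinearConstraint` expresses the nonconvex feasibility condition, and constrained differential evolution stands in for the Toolbox's search):

```python
import numpy as np
from scipy.optimize import NonlinearConstraint, differential_evolution

# Convex objective over the plane minus the unit disk (a nonconvex region).
def obj(v):
    x, y = v
    return x**2 + y**2 - x + y

# Feasibility: x^2 + y^2 >= 1 (outside the unit disk).
outside_disk = NonlinearConstraint(lambda v: v[0]**2 + v[1]**2, 1.0, np.inf)

best = differential_evolution(obj, bounds=[(-2, 2), (-2, 2)],
                              constraints=(outside_disk,), seed=1)
```

The global minimum lies on the boundary of the disk, near (.707, -.707), with objective value about -0.414.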

Flat Functions of Higher Dimension

Consider a 4-D function from optimization benchmark tests, known as the Wood function. This function has an extremely flat, but non-constant, region near the global minimum. Local search methods tend to find the flat region, but terminate before finding the global minimum because the gradient of the function in that region is nearly zero.

 > objWood := 100*(y-x^2)^2+(1-x)^2+90*(z-w^2)^2+(1-w)^2+10.1*((y-1)^2+(z-1)^2)+19.8*(y-1)*(z-1);
 ${\mathrm{objWood}}{:=}{100}{}{\left({-}{{x}}^{{2}}{+}{y}\right)}^{{2}}{+}{\left({1}{-}{x}\right)}^{{2}}{+}{90}{}{\left({-}{{w}}^{{2}}{+}{z}\right)}^{{2}}{+}{\left({1}{-}{w}\right)}^{{2}}{+}{10.1}{}{\left({y}{-}{1}\right)}^{{2}}{+}{10.1}{}{\left({z}{-}{1}\right)}^{{2}}{+}{19.8}{}\left({y}{-}{1}\right){}\left({z}{-}{1}\right)$ (2.4.1)

The global minimum is at $w$ = $x$ = $y$ = $z$ = 1. To see why this is a difficult function to minimize, consider its 2-D projection fixing $z=1$ and $w=1$.

 > objWoodXY := eval( objWood, { z=1, w=1 } );
 ${\mathrm{objWoodXY}}{:=}{100}{}{\left({-}{{x}}^{{2}}{+}{y}\right)}^{{2}}{+}{\left({1}{-}{x}\right)}^{{2}}{+}{10.1}{}{\left({y}{-}{1}\right)}^{{2}}$ (2.4.2)

The contours indicate that the function is extremely flat around the origin. Local search methods generally find a local minimum in this region and terminate.

 > p1:=plot3d( objWoodXY+800, x=-2..2, y=-2..2, axes=boxed, transparency=.5, style=patch): p2:=plot3d(0, x=-2..2, y=-2..2, style=patchnogrid, color=objWoodXY): plots[display](p1,p2, orientation=[39,72] );

The Maple local search terminates before finding the global minimum, but the global search finds the minimum easily.

 > NLPSolve( objWood, initialpoint=[w=3,x=-2,y=2,z=-2] ); GlobalSolve( objWood, w=-10..10, x=-10..10, y=-10..10, z=-10..10 );
 $\left[{0.148242645237016024}{,}\left[{w}{=}{1.10990339256281}{,}{x}{=}{0.942001104081936}{,}{y}{=}{0.872155202370579}{,}{z}{=}{1.22185294890580}\right]\right]$
 $\left[{2.40559804893031305}{}{{10}}^{{-21}}{,}\left[{w}{=}{1.00000000001882}{,}{x}{=}{0.999999999979397}{,}{y}{=}{0.999999999961342}{,}{z}{=}{1.00000000003571}\right]\right]$ (2.4.3)
 >
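As an illustrative aside, the same benchmark can be tried in SciPy (assumed here, with `differential_evolution` as a stand-in global method; the larger population size is an assumption made for reliability on this flat function):

```python
from scipy.optimize import minimize, differential_evolution

# The 4-D Wood-type benchmark from the worksheet (variables w, x, y, z).
def wood(v):
    w, x, y, z = v
    return (100 * (y - x**2)**2 + (1 - x)**2 + 90 * (z - w**2)**2
            + (1 - w)**2 + 10.1 * ((y - 1)**2 + (z - 1)**2)
            + 19.8 * (y - 1) * (z - 1))

bounds = [(-10, 10)] * 4

# Local search from the same poor start as the worksheet.
local = minimize(wood, x0=[3, -2, 2, -2], bounds=bounds)

# Global search locates the true minimum at w = x = y = z = 1.
best = differential_evolution(wood, bounds=bounds, popsize=30, seed=1)
```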

Industrial Applications of Global Optimization

The examples in the previous section are simple exercises that demonstrate how global search methods can greatly outperform local search methods. Following are some real-world applications of global optimization.

Tuning an Automobile Suspension System

Consider the problem of designing a suspension system that exhibits some specified behavior in response to a bump in the road. The problem variables are the spring constant k and the damper constant b. Given the mass of the car on each wheel, m, and the expected amplitude of a typical bump, find values for k and b to create a system response that matches the desired response.

The objective function to be minimized is the squared error between the desired and actual response as a function of k and b over a discrete set of times. After deriving the actual response by solving the differential equation of the system, use optimization to find the values of k and b that minimize the error.

An Example Target Response

Suppose the target response is a decaying exponential function. When the automobile hits a bump of amplitude 0.1 meter, the amplitude of its oscillations decays exponentially.

 > restart;
 > Target := t->(0.14*exp(-4.9*t)*cos(5*t -.7754)):
 > plot( Target(t), t = 0..1, title="Target Response", labels=["seconds","metres"], thickness=2 );

Deriving the Actual Response

To derive the actual response of the system to a bump, solve the differential equation of the system with the initial conditions x(0) = 0.1 and x'(0) = 0.

The differential equation for an unforced mass-spring-damper system:

 > System_Equation := m*diff(x(t),t,t) + b*diff(x(t),t) + k*x(t) = 0;
 ${\mathrm{System_Equation}}{:=}{m}{}\left(\frac{{{ⅆ}}^{{2}}}{{ⅆ}{{t}}^{{2}}}{}{x}{}\left({t}\right)\right){+}{b}{}\left(\frac{{ⅆ}}{{ⅆ}{t}}{}{x}{}\left({t}\right)\right){+}{k}{}{x}{}\left({t}\right){=}{0}$ (3.1.2.1)

Solve the system and define the response as a function of k, b, and t, fixing m=450 kg.

 > m:=450:
 > sol := dsolve( {System_Equation, x(0)=.1, D(x)(0)=0} );
 ${\mathrm{sol}}{:=}{x}{}\left({t}\right){=}\frac{{1}}{{20}}{}\frac{\left(\sqrt{{{b}}^{{2}}{-}{1800}{}{k}}{}{b}{+}{{b}}^{{2}}{-}{1800}{}{k}\right){}{{ⅇ}}^{\left({-}\frac{{1}}{{900}}{}{b}{+}\frac{{1}}{{900}}{}\sqrt{{{b}}^{{2}}{-}{1800}{}{k}}\right){}{t}}}{{{b}}^{{2}}{-}{1800}{}{k}}{+}\frac{{1}}{{20}}{}\frac{\left({-}\sqrt{{{b}}^{{2}}{-}{1800}{}{k}}{}{b}{+}{{b}}^{{2}}{-}{1800}{}{k}\right){}{{ⅇ}}^{\left({-}\frac{{1}}{{900}}{}{b}{-}\frac{{1}}{{900}}{}\sqrt{{{b}}^{{2}}{-}{1800}{}{k}}\right){}{t}}}{{{b}}^{{2}}{-}{1800}{}{k}}$ (3.1.2.2)

The actual response of the system as a function of k, b, and t:

 > Actual := unapply( rhs( sol ), k, b, t):

Overlay plots of the actual response, with k=10^4 N/m and b=10^3 Ns/m, and the target response.

 > plot([ Target(t), Actual(10^4, 10^3, t) ], t=0..1, thickness=3, labels=["seconds","metres"],      title="Target Response and Actual Response for k = 10^4 and b = 10^3", legend=["Target Response", "Actual Response"] );

 >

These choices of k and b do not provide a good match to the target. One method for improving the match is to substitute different combinations of k and b until an acceptable solution is found. The following section describes a more systematic approach using optimization.

Measuring the Error Between the Target and Actual Responses

Ideally, error is measured as the integral of the squared difference between the target and actual responses. However, evaluating the integral in this application is extremely difficult, due to the complexity of the response function. As an approximation, sample the functions at key times and add the squared differences between the functions at these points.

Sample the output at times $t=\frac{1}{2},1,\frac{3}{2},2$.

 > time_sample:= 1/2, 1, 3/2, 2;
 ${\mathrm{time_sample}}{:=}\frac{{1}}{{2}}{,}{1}{,}\frac{{3}}{{2}}{,}{2}$ (3.1.3.1)

The objective function is the sum of squared differences between the actual and target responses at these four times. Although the error function has only two variables, it is extremely complicated.

 > Error := add( (Actual(k, b, time_sample[i]) - Target(time_sample[i]))^2, i = 1..4);
 ${\mathrm{Error}}{:=}{\left(\frac{{1}}{{20}}{}\frac{\left(\sqrt{{{b}}^{{2}}{-}{1800}{}{k}}{}{b}{+}{{b}}^{{2}}{-}{1800}{}{k}\right){}{{ⅇ}}^{{-}\frac{{1}}{{1800}}{}{b}{+}\frac{{1}}{{1800}}{}\sqrt{{{b}}^{{2}}{-}{1800}{}{k}}}}{{{b}}^{{2}}{-}{1800}{}{k}}{+}\frac{{1}}{{20}}{}\frac{\left({-}\sqrt{{{b}}^{{2}}{-}{1800}{}{k}}{}{b}{+}{{b}}^{{2}}{-}{1800}{}{k}\right){}{{ⅇ}}^{{-}\frac{{1}}{{1800}}{}{b}{-}\frac{{1}}{{1800}}{}\sqrt{{{b}}^{{2}}{-}{1800}{}{k}}}}{{{b}}^{{2}}{-}{1800}{}{k}}{+}{0.001850800734}\right)}^{{2}}{+}{\left(\frac{{1}}{{20}}{}\frac{\left(\sqrt{{{b}}^{{2}}{-}{1800}{}{k}}{}{b}{+}{{b}}^{{2}}{-}{1800}{}{k}\right){}{{ⅇ}}^{{-}\frac{{1}}{{900}}{}{b}{+}\frac{{1}}{{900}}{}\sqrt{{{b}}^{{2}}{-}{1800}{}{k}}}}{{{b}}^{{2}}{-}{1800}{}{k}}{+}\frac{{1}}{{20}}{}\frac{\left({-}\sqrt{{{b}}^{{2}}{-}{1800}{}{k}}{}{b}{+}{{b}}^{{2}}{-}{1800}{}{k}\right){}{{ⅇ}}^{{-}\frac{{1}}{{900}}{}{b}{-}\frac{{1}}{{900}}{}\sqrt{{{b}}^{{2}}{-}{1800}{}{k}}}}{{{b}}^{{2}}{-}{1800}{}{k}}{+}{0.0004886026617}\right)}^{{2}}{+}{\left(\frac{{1}}{{20}}{}\frac{\left(\sqrt{{{b}}^{{2}}{-}{1800}{}{k}}{}{b}{+}{{b}}^{{2}}{-}{1800}{}{k}\right){}{{ⅇ}}^{{-}\frac{{1}}{{600}}{}{b}{+}\frac{{1}}{{600}}{}\sqrt{{{b}}^{{2}}{-}{1800}{}{k}}}}{{{b}}^{{2}}{-}{1800}{}{k}}{+}\frac{{1}}{{20}}{}\frac{\left({-}\sqrt{{{b}}^{{2}}{-}{1800}{}{k}}{}{b}{+}{{b}}^{{2}}{-}{1800}{}{k}\right){}{{ⅇ}}^{{-}\frac{{1}}{{600}}{}{b}{-}\frac{{1}}{{600}}{}\sqrt{{{b}}^{{2}}{-}{1800}{}{k}}}}{{{b}}^{{2}}{-}{1800}{}{k}}{-}{0.00008133982026}\right)}^{{2}}{+}{\left(\frac{{1}}{{20}}{}\frac{\left(\sqrt{{{b}}^{{2}}{-}{1800}{}{k}}{}{b}{+}{{b}}^{{2}}{-}{1800}{}{k}\right){}{{ⅇ}}^{{-}\frac{{1}}{{450}}{}{b}{+}\frac{{1}}{{450}}{}\sqrt{{{b}}^{{2}}{-}{1800}{}{k}}}}{{{b}}^{{2}}{-}{1800}{}{k}}{+}\frac{{1}}{{20}}{}\frac{\left({-}\sqrt{{{b}}^{{2}}{-}{1800}{}{k}}{}{b}{+}{{b}}^{{2}}{-}{1800}{}{k}\right){}{{ⅇ}}^{{-}\frac{{1}}{{450}}{}{b}{-}\frac{{1}}{{450}}{}\sqrt{{{b}}^{{2}}{-}{1800}{}{k}}}}{{{b}}^{{2}}{-}{1800}{}{k}}{+}{0.000007608201709}\right)}^{{2}}$ (3.1.3.2)

Plot the error function over the k-b plane. The function is not only nonconvex, but also flat in the region likely to contain the global minimum. It is not known before computation whether the plot region shown contains the desired solution.
(Note: By differential equation theory, the imaginary parts of the response function, and therefore of the error function, cancel exactly. However, Maple's plotting routines do not verify this before calculation. To plot the function correctly, plot the real part of the error function, which is equal to the error function itself.)

 > plot3d( Re(Error), b=100..5000, k=1000..25000,         axes=boxed, shading=zhue, transparency=.35, style=patchnogrid, orientation=[75,75] );

Minimizing the Error with Global Optimization

Minimize the error function using the Global Optimization Toolbox.

 > with( GlobalOptimization );

 $\left[{\mathrm{GetLastSolution}}{,}{\mathrm{GlobalSolve}}{,}{\mathrm{Interactive}}\right]$ (3.1.4.1)

The toolbox finds the global minimum.  Notice that the error is very close to zero, as desired.

 > sol := GlobalSolve( Re(Error), b=100..10000, k=100..40000 );
 ${\mathrm{sol}}{:=}\left[{4.99174935325955514}{}{{10}}^{{-15}}{,}\left[{b}{=}{4410.14572930915}{,}{k}{=}{22054.5315495189}\right]\right]$ (3.1.4.2)
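The same least-squares fit can be sketched outside Maple. The sketch below (SciPy/NumPy assumed) builds the response in closed form from the characteristic roots of $m x'' + b x' + k x = 0$, and, as an assumption made to pin the fit down, samples the response at more times than the worksheet's four points:

```python
import numpy as np
from scipy.optimize import differential_evolution

m = 450.0  # mass per wheel, kg

def target(t):
    return 0.14 * np.exp(-4.9 * t) * np.cos(5 * t - 0.7754)

def response(k, b, t):
    # Closed-form solution of m x'' + b x' + k x = 0 with x(0) = 0.1,
    # x'(0) = 0, via the (possibly complex) characteristic roots.
    disc = np.sqrt(complex(b * b - 4 * m * k))
    r1, r2 = (-b + disc) / (2 * m), (-b - disc) / (2 * m)
    c1 = 0.1 * r2 / (r2 - r1)
    c2 = 0.1 - c1
    return (c1 * np.exp(r1 * t) + c2 * np.exp(r2 * t)).real

# Sum of squared errors at the sample times.
times = np.linspace(0.05, 1.0, 20)

def error(v):
    b, k = v
    return sum((response(k, b, t) - target(t))**2 for t in times)

best = differential_evolution(error, bounds=[(100, 10000), (100, 40000)], seed=1)
b_opt, k_opt = best.x
```

With the denser sampling, the recovered parameters land near the worksheet's solution, b ≈ 4410 Ns/m and k ≈ 22055 N/m.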

Plot the error function in a neighborhood of the global minimum. Even in this small region, it is not clear from the plot where the global minimum lies.

 > plot3d( Re(Error), b=4300..4600, k=21000..23000,         axes=boxed, shading=zhue, transparency=.35, style=patchnogrid, orientation=[75,75] );

Testing the Result

Overlay plots of the actual response, with the values for k and b found using Global Optimization, and the target response. They match very well.

 > assign( sol[2] );
 > plot([Target(t), Actual(k, b, t)+.05, .05  ], t=0..1,         labels=["seconds","metres"], title = "Target and Actual Responses Overlaid (with vertical offset)", thickness=2, color=[red,green,black]);

A plot of the difference between the target and actual responses shows that even the greatest deviation is negligible.

 > plot(abs(Target(t) - Actual(k, b, t)), t=0..1,         title="Absolute Value of Target minus Actual Response", labels=["seconds","meters"], thickness=2);

Designing the Optimal Perfume Bottle

A fragrance company wants to design a new perfume bottle of a given volume that is inexpensive to ship. The shipping cost is proportional to the volume of the box used. Minimize this volume while preserving the volume of the bottle and the aesthetic appeal of the design.

The design of the bottle is an ellipsoid with a flat base. There are four problem variables: the three semi-axis lengths of the ellipsoid and the height at which the ellipsoid is cut to form the base.

Modeling the Bottle

 > restart;
 > interface(warnlevel=0): with(plots): with(plottools):
 > with( GlobalOptimization );
 $\left[{\mathrm{GetLastSolution}}{,}{\mathrm{GlobalSolve}}{,}{\mathrm{Interactive}}\right]$ (3.2.1.1)

Model the shape of the bottle as an ellipsoid with semi-axes a, b, and c, centered at the origin. The top of the bottle is ellipsoidal, but the bottle is cut flat at the plane $z=-h$, where h is the fourth parameter of the model. Measure the distances in cm and the volume in mL.

 > bottleShape := x^2/a^2 + y^2/b^2 +z^2/c^2 = 1;
 ${\mathrm{bottleShape}}{:=}\frac{{{x}}^{{2}}}{{{a}}^{{2}}}{+}\frac{{{y}}^{{2}}}{{{b}}^{{2}}}{+}\frac{{{z}}^{{2}}}{{{c}}^{{2}}}{=}{1}$ (3.2.1.2)

Plot a generic bottle with a flat bottom. The goal is to improve this shape using optimization.

 > implicitplot3d( eval(bottleShape,{a=2,b=1.5,c=1}), x=-2..2, y=-1.5..1.5, z=-6/10..1,                  scaling=constrained, orientation=[75,80], style=patchnogrid, lightmodel=light1);
 >

The Objective Function

Objective function: Package volume

The box that contains the bottle is a rectangular parallelepiped of width 2a, length 2b, and height $h+c+1$ cm, where the extra centimeter allows room for the spray pump. The objective is to minimize $4ab\left(c+h+1\right)$.

 > packageVolume := 4*a*b*(c + h + 1);
 ${\mathrm{packageVolume}}{:=}{4}{}{a}{}{b}{}\left({c}{+}{h}{+}{1}\right)$ (3.2.2.1)

The Constraints

Constraint 1:  Bottle volume

The bottle must hold at least 40 mL of perfume. The volume of the bottle is the area of a horizontal cross-section of the ellipsoid as a function of z, integrated over the interval $z=-h..c$.

 > z_crossSection := Pi*a*b*(1-z^2/c^2);
 ${\mathrm{z_crossSection}}{:=}{\mathrm{π}}{}{a}{}{b}{}\left({1}{-}\frac{{{z}}^{{2}}}{{{c}}^{{2}}}\right)$ (3.2.3.1)

 > perfumeVolume := Int( z_crossSection, z=-h..c ); perfumeVolume := value( perfumeVolume );
 ${\mathrm{perfumeVolume}}{:=}{{∫}}_{{-}{h}}^{{c}}{\mathrm{π}}{}{a}{}{b}{}\left({1}{-}\frac{{{z}}^{{2}}}{{{c}}^{{2}}}\right)\phantom{\rule[-0.0ex]{0.3em}{0.0ex}}{ⅆ}{z}$
 ${\mathrm{perfumeVolume}}{:=}{-}\frac{{1}}{{3}}{}\frac{{\mathrm{π}}{}{a}{}{b}{}\left({{c}}^{{3}}{+}{{h}}^{{3}}\right)}{{{c}}^{{2}}}{+}{\mathrm{π}}{}{a}{}{b}{}\left({c}{+}{h}\right)$ (3.2.3.2)

Check the formula by evaluating it at $h=c$. This reduces to the formula for the volume of an ellipsoid, as expected.

 > eval( perfumeVolume, { h=c } );
 $\frac{{4}}{{3}}{}{\mathrm{π}}{}{a}{}{b}{}{c}$ (3.2.3.3)
 > constraint1 := perfumeVolume >= 40;
 ${\mathrm{constraint1}}{:=}{40}{\le }{-}\frac{{1}}{{3}}{}\frac{{\mathrm{π}}{}{a}{}{b}{}\left({{c}}^{{3}}{+}{{h}}^{{3}}\right)}{{{c}}^{{2}}}{+}{\mathrm{π}}{}{a}{}{b}{}\left({c}{+}{h}\right)$ (3.2.3.4)
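The volume formula can be checked independently of Maple; the following sketch uses SymPy (assumed available) to redo the integral and compare it with the worksheet's closed form:

```python
import sympy as sp

a, b, c, h, z = sp.symbols('a b c h z', positive=True)

# Elliptical cross-section area at height z, as in the worksheet.
cross_section = sp.pi * a * b * (1 - z**2 / c**2)

# Bottle volume: integrate the cross-section from z = -h to z = c.
volume = sp.integrate(cross_section, (z, -h, c))

# The worksheet's closed form for perfumeVolume.
closed_form = (-sp.pi * a * b * (c**3 + h**3) / (3 * c**2)
               + sp.pi * a * b * (c + h))
difference = sp.simplify(volume - closed_form)

# Sanity check: at h = c the bottle is the full ellipsoid, 4/3*Pi*a*b*c.
full_ellipsoid = sp.simplify(volume.subs(h, c))
```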

Constraint 2:  Width of the base

The base must be at least 2 cm in diameter for the bottle to be stable. As a function of h, the minimum diameter of the cross-section at the plane  $z=-h$ is $2\sqrt{{b}^{2}\left(1-\frac{{h}^{2}}{{c}^{2}}\right)}$. Constrain h so that this diameter is at least 2 cm.

 > hSquareMax := solve( 4*b^2*(1-h^2/(c^2)) = 2^2, h^2 );
 ${\mathrm{hSquareMax}}{:=}\frac{\left({{b}}^{{2}}{-}{1}\right){}{{c}}^{{2}}}{{{b}}^{{2}}}$ (3.2.3.5)

This gives the following constraint.

 > constraint2 := h^2 <= hSquareMax;
 ${\mathrm{constraint2}}{:=}{{h}}^{{2}}{\le }\frac{\left({{b}}^{{2}}{-}{1}\right){}{{c}}^{{2}}}{{{b}}^{{2}}}$ (3.2.3.6)

Constraint 3: Aesthetic proportions

Consumers like sleek bottles. Though it may be the most economical design, a spherical perfume bottle will not sell. Constrain the proportions of the bottle so that $2b\le a$.

 > constraint3 := 2*b <= a;
 ${\mathrm{constraint3}}{:=}{2}{}{b}{\le }{a}$ (3.2.3.7)

Optimization

Optimization step

 > constraints := { constraint1, constraint2, constraint3 };
 ${\mathrm{constraints}}{:=}\left\{{40}{\le }{-}\frac{{1}}{{3}}{}\frac{{\mathrm{π}}{}{a}{}{b}{}\left({{c}}^{{3}}{+}{{h}}^{{3}}\right)}{{{c}}^{{2}}}{+}{\mathrm{π}}{}{a}{}{b}{}\left({c}{+}{h}\right){,}{{h}}^{{2}}{\le }\frac{\left({{b}}^{{2}}{-}{1}\right){}{{c}}^{{2}}}{{{b}}^{{2}}}{,}{2}{}{b}{\le }{a}\right\}$ (3.2.4.1)

Find the optimal solution using the Global Optimization Toolbox. The minimum package volume is 77.69 mL, which is greater than the 40 mL volume of the bottle, as expected.

 > sol := GlobalSolve( packageVolume, constraints, a=1..10, b=1..10, c=1..10, h=1..10 );
 ${\mathrm{sol}}{:=}\left[{77.6889319197028669}{,}\left[{a}{=}{2.15295952040264}{,}{b}{=}{1.07647976020132}{,}{c}{=}{5.38628117532871}{,}{h}{=}{1.99397753934789}\right]\right]$ (3.2.4.2)
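The same constrained design problem can be sketched with SciPy (assumed here; constrained `differential_evolution` substitutes for the Toolbox's solver, with the three constraints expressed as `NonlinearConstraint` objects):

```python
import numpy as np
from scipy.optimize import NonlinearConstraint, differential_evolution

# Package volume to minimize: 4*a*b*(c + h + 1).
def package_volume(v):
    a, b, c, h = v
    return 4 * a * b * (c + h + 1)

# Bottle volume from the worksheet's closed form.
def perfume_volume(v):
    a, b, c, h = v
    return np.pi * a * b * (c + h) - np.pi * a * b * (c**3 + h**3) / (3 * c**2)

cons = (
    # Constraint 1: the bottle must hold at least 40 mL.
    NonlinearConstraint(perfume_volume, 40.0, np.inf),
    # Constraint 2: base width, h^2 <= (b^2 - 1) c^2 / b^2.
    NonlinearConstraint(lambda v: (v[1]**2 - 1) * v[2]**2 / v[1]**2 - v[3]**2,
                        0.0, np.inf),
    # Constraint 3: aesthetic proportions, 2 b <= a.
    NonlinearConstraint(lambda v: v[0] - 2 * v[1], 0.0, np.inf),
)

best = differential_evolution(package_volume, bounds=[(1, 10)] * 4,
                              constraints=cons, seed=1)
```

The minimum package volume found this way agrees with the worksheet's 77.69 mL.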

Testing

Check that the solution satisfies all of the constraints within an acceptable level of precision. For greater accuracy, use the feasibilitytolerance option in the GlobalSolve calling sequence.

 > subs( sol[2], constraints ): evalf[6](%);
 $\left\{{2.15296}{\le }{2.15296}{,}{3.97595}{\le }{3.97595}{,}{40.}{\le }{40.0000}\right\}$ (3.2.5.1)

The optimal ellipsoid:

 > bottleShape := eval( bottleShape, sol[2] );
 ${\mathrm{bottleShape}}{:=}{0.215738806226303}{}{{x}}^{{2}}{+}{0.862955224905211}{}{{y}}^{{2}}{+}{0.0344684662121025}{}{{z}}^{{2}}{=}{1}$ (3.2.5.2)

 > A := eval(a, sol[2] ): B := eval(b, sol[2] ): C := eval(c, sol[2] ): H:=eval(h, sol[2]):

A sketch of the optimal perfume bottle:

 > glass := implicitplot3d( bottleShape, x=-A..A, y=-B..B, z=-H..C,                         grid=[8,8,8], style=patchnogrid, lightmodel=light4,                         transparency=.1, color=gold): liquid := implicitplot3d( lhs(bottleShape)=.85, x=-A..A, y=-B..B, z=-.9*H..(.7)*C,                         style=patchnogrid, lightmodel=light4, transparency=.8, color=black): cap := cylinder([0,0,.95*C], A/6, color=grey): display( glass, liquid, cap, scaling=constrained, axes=box);