**Using *Mathematica* on Lagrange-multiplier Problems**

Copyright © 1997, 1998, 2000 by James F. Hurley, Department of Mathematics, University of Connecticut, Storrs, CT 06269-3009. All rights reserved

The method of Lagrange multipliers is useful for finding the extreme values of a real-valued function *f* of several real variables on a subset of *n*-dimensional real Euclidean space determined by an equation *g*(**x**) = 0. The method is easiest to describe in the case *n* = 2. So consider the problem of finding the maximum and minimum values of a function *f* on a curve with equation *g(x, y)* = 0. Suppose that both *f* and *g* have continuous partial derivatives.

The key point is that for any value *k*, the function *f* assumes the value *k* precisely on the level curve *f(x, y)* = *k*. To find the maximum or minimum value of *f* on the curve *g(x, y)* = 0, it is thus enough to plot the level curves of *f* and find the largest and smallest values of *k* for which the curve *g(x, y)* = 0 intersects some level curve *f(x, y)* = *k*.

The following figure illustrates the idea for the problem from Example 10.4 of Section 4.10: find the maximum value of *f* over the curve 2${x}^{4}$ + 3${y}^{4}$ = 32, if the formula for *f* is *f(x, y)* = ${\left({x}^{2} + {y}^{2}\right)}^{1/2}$. (Compare Figure 10.3 of the text, p. 222.) To produce the figure, hit the Enter key (at the lower right of the keyboard) after the last command of the following *Mathematica* routine. (There are two preliminary partial figures first, which then combine to give the final figure.)

In[1]:=

Needs["Graphics`ImplicitPlot`"];
ContourPlot[(x^2 + y^2)^(1/2), {x, -3, 3},
  {y, -3, 3}, ContourShading -> False,
  PlotPoints -> 60,
  ContourSmoothing -> Automatic];
Plot[{(32/3 - 2 x^4/3)^(1/4),
  -(32/3 - 2 x^4/3)^(1/4)}, {x, -2, 2},
  PlotStyle -> RGBColor[0, 0, 1]];
Show[%, %%, AspectRatio -> Automatic,
  AxesLabel -> {x, y}]
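The same comparison can also be carried out numerically. The short Python script below (an illustrative sketch alongside the *Mathematica* routine, not part of it) scans the upper branch of the constraint curve 2*x*⁴ + 3*y*⁴ = 32 and records the largest and smallest values of *f(x, y)* = (*x*² + *y*²)^(1/2); these correspond to the outermost and innermost level curves of *f* that still meet the constraint curve.

```python
import math

# f(x, y) = (x^2 + y^2)^(1/2), the function from Example 10.4.
def f(x, y):
    return math.hypot(x, y)

# Scan the upper branch y >= 0 of 2 x^4 + 3 y^4 = 32; since f is
# unchanged under y -> -y, this suffices.  At the endpoints x = +/-2
# the curve meets the x-axis (2 * 2^4 = 32).
n = 20000
values = []
for i in range(n + 1):
    x = -2.0 + 4.0 * i / n
    y = ((32 - 2 * x**4) / 3) ** 0.25
    values.append(f(x, y))

fmax, fmin = max(values), min(values)
print(f"max of f on the curve ≈ {fmax:.5f}")   # attained near x ≈ ±1.76
print(f"min of f on the curve ≈ {fmin:.5f}")   # attained at x = 0
```

The scan is crude, but it shows the qualitative picture the figure conveys: the extreme level curves of *f* are the ones tangent to the constraint curve.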

To find the points of intersection of the constraint curve 2${x}^{4}$ + 3${y}^{4}$ = 32 and the level curves of the function *f*, it might be feasible to solve *g(x, y)* = 0 for *y* as a function of *x*. If so, you could substitute that expression into the level-curve equation *f(x, y)* = *k*, where *k* corresponds to the level curve tangent to the constraint curve at each point of intersection. But even from the graph, the precise value of *k* for that curve is not clear. Moreover, such elimination is not possible for most constraint functions *g*. However, not only have we met the idea of *g(x, y)* = 0 implicitly defining *y* as a differentiable function of *x*, but in Section 4.5 we even developed tools to study such functions.

Suppose then that $\frac{\partial g}{\partial y} \neq 0$, so that by the implicit-function theorem the constraint equation *g(x, y)* = 0 defines *y* as a differentiable function of *x*. Chain Rule 5.3 then makes it easy to differentiate *g(x, y)* = 0 with respect to *x*:

(1)  $0 = \frac{d}{dx}\left[g(x, y(x))\right] = \nabla g(x, y) \cdot \left(\frac{dx}{dx}, \frac{dy}{dx}\right) = \nabla g(x, y) \cdot \left(1, y'(x)\right).$

This holds in particular at any point *P* at which *f* has an extreme value on the constraint curve *g(x, y)* = 0. The assumption that $\frac{\partial g}{\partial y} \neq 0$ assures that **∇***g(x, y)* is nonzero there. It then follows immediately from (1) that the gradient is perpendicular to the nonzero vector (1, *y'(x)*).

At an extreme-value point of *f* on the constraint curve, its derivative with respect to *x* must of course be 0. Since *g(x, y)* = 0 defines *y* as a differentiable function of *x*, calculation of that derivative proceeds just as above:

$\frac{d}{dx}\left[f(x, y(x))\right] = \nabla f(x, y) \cdot \left(\frac{dx}{dx}, \frac{dy}{dx}\right) = \nabla f(x, y) \cdot \left(1, y'(x)\right).$

For this derivative to be 0, the gradient **∇***f(x, y)* must also be perpendicular to the nonzero vector (1, *y'(x)*). Since both gradients are perpendicular to the same nonzero vector in the plane, they must be parallel:

(2)  **∇***f(x, y)* = *λ* **∇***g(x, y)* for some real number *λ* ≠ 0.

If **∇***f* is **0**, then (2) still holds: *λ* = 0 makes it true. This leads to the Lagrange multiplier algorithm.

**Lagrange Multiplier Algorithm 10.2**. To find all candidates for extreme values of *f* subject to a constraint *g(x, y)* = 0,

• solve the system of equations

(3)  **∇***f(x, y)* = *λ* **∇***g(x, y)*,  *g(x, y)* = 0

for *x*, *y*, and *λ*.

• evaluate *f* at every candidate (*x*, *y*) that solves the system (3);

• the largest value that results is the maximum of *f* subject to *g(x, y)* = 0;

• the smallest resulting value is the minimum of *f* subject to the constraint.
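For a quick cross-check of these steps outside *Mathematica*, here is a minimal Python sketch for the concrete case *f(x, y)* = 2*x* − *y* subject to *x*² + *y*² = 16 (the example worked below). Since eliminating *λ* by hand is easy for this system, the script solves it directly rather than calling a symbolic solver:

```python
import math

# Objective and constraint for the example: f = 2x - y on x^2 + y^2 = 16.
def f(x, y):
    return 2 * x - y

# Lagrange system (3): (2, -1) = lambda * (2x, 2y) together with g = 0.
# Eliminating lambda: 2 = 2*lambda*x and -1 = 2*lambda*y give y = -x/2;
# substituting into x^2 + y^2 = 16 yields (5/4) x^2 = 16, so x = ±8/√5.
candidates = []
for sign in (+1, -1):
    x = sign * 8 / math.sqrt(5)
    y = -x / 2
    candidates.append((x, y, f(x, y)))

for x, y, val in candidates:
    print(f"x = {x:+.5f}, y = {y:+.5f}, f(x, y) = {val:+.5f}")

fmax = max(val for _, _, val in candidates)   # 4*sqrt(5)
fmin = min(val for _, _, val in candidates)   # -4*sqrt(5)
```

The two candidate points are exactly the ones *Mathematica*'s Solve produces below, and evaluating *f* at each of them completes the algorithm.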

You really don't need to know the value of *λ*, so when possible you can eliminate *λ* and solve for all pairs (*x, y*) that satisfy (3). The obstacle to doing this by hand is the difficulty (or even impossibility) of solving (3) exactly. This is a place where *Mathematica*'s built-in Solve command can sometimes help. To illustrate, compare what follows with a hand solution of the following example, taken from the next edition of the text.

**Example**. Find the maximum and minimum of *f* on the circle ${x}^{2} + {y}^{2}$ = 16, if *f(x, y)* = 2*x* - *y*.

In[5]:=

F[x_, y_] := 2 x - y
G[x_, y_] := x^2 + y^2 - 16
gradf = {D[F[x, y], x], D[F[x, y], y]};
gradg = {D[G[x, y], x], D[G[x, y], y]};
Print["grad f = ", gradf]
Print["grad g = ", gradg]

grad f = {2, -1}

grad g = {2 x, 2 y}

**Notes**:

1. When defining functions in *Mathematica*, an underscore *must* follow the initial occurrence of each variable on the left of the := sign (as in F[x_, y_]).

2. The *Mathematica* differentiation command has the form D[F[x, y, ...], z], where the last argument specifies the variable with respect to which to carry out the differentiation.

3. The coordinates of vectors are enclosed in braces, with commas between them.

4. A semicolon at the end of a line suppresses printing the result of that computation.

To solve the Lagrange-multiplier system above, use the following command, which invokes the built-in Solve command. You must use the *double equal sign* when specifying an equation for *Mathematica* to work on.

In[11]:=

candidates = Solve[{gradf == lambda gradg,
  G[x, y] == 0}, {x, y, lambda}]

Out[11]=

$\left\{\left\{\mathrm{lambda}\to -\frac{\sqrt{5}}{8},x\to -\frac{8}{\sqrt{5}},y\to \frac{4}{\sqrt{5}}\right\},\left\{\mathrm{lambda}\to \frac{\sqrt{5}}{8},x\to \frac{8}{\sqrt{5}},y\to -\frac{4}{\sqrt{5}}\right\}\right\}$
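You can confirm by direct substitution that each triple returned by Solve satisfies the full system **∇***f* = *λ***∇***g*, *g(x, y)* = 0. A small floating-point check in Python (an illustration alongside the *Mathematica* session, not part of it):

```python
import math

sqrt5 = math.sqrt(5)
# The two (lambda, x, y) triples from Solve, as floating-point numbers:
solutions = [(-sqrt5 / 8, -8 / sqrt5,  4 / sqrt5),
             ( sqrt5 / 8,  8 / sqrt5, -4 / sqrt5)]

for lam, x, y in solutions:
    # grad f = (2, -1) must equal lambda * grad g = lambda * (2x, 2y):
    assert abs(2 - lam * 2 * x) < 1e-12
    assert abs(-1 - lam * 2 * y) < 1e-12
    # and the constraint x^2 + y^2 = 16 must hold:
    assert abs(x**2 + y**2 - 16) < 1e-12

print("both Solve candidates satisfy the Lagrange system")
```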

To complete the solution of the example, just compute the value of *f* at the candidates *Mathematica* found. The following TableForm command is a convenient means of doing so. (The Print command works similarly to its counterparts in FORTRAN, BASIC, and other languages.) **Note**: the /. operator tells *Mathematica* to apply the replacement rules in candidates to the expression that precedes it.

In[12]:=

Print[" x", " ", " y", " ", " f(x, y)"]
Print["========", " ", "========", " ", "=========="]
{x, y, F[x, y]} /. candidates // TableForm

Out[14]//TableForm=

| *x* | *y* | *f(x, y)* |
| --- | --- | --- |
| $-\frac{8}{\sqrt{5}}$ | $\frac{4}{\sqrt{5}}$ | $-4\sqrt{5}$ |
| $\frac{8}{\sqrt{5}}$ | $-\frac{4}{\sqrt{5}}$ | $4\sqrt{5}$ |

In this case, it's clear which value of *f* is the maximum and which is the minimum, since *Mathematica* calculates only two function values, of which one is positive and the other negative. It is easy to convert the output numbers in the above table to decimal approximations: use the N[ ] command. Try the following command to obtain decimal expressions for the numbers the last routine gives:

In[15]:=

N[%]

Out[15]=

$\left\{\left\{-3.5777087639996634, 1.7888543819998317, -8.94427190999916\right\}, \left\{3.5777087639996634, -1.7888543819998317, 8.94427190999916\right\}\right\}$
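These decimals are simply the machine-precision values of the exact answers: 8/√5 ≈ 3.57771, 4/√5 ≈ 1.78885, and 4√5 ≈ 8.94427. The same conversion in Python (an illustrative aside, not *Mathematica* syntax):

```python
import math

# Decimal values of the exact coordinates and function values above.
x_val = 8 / math.sqrt(5)    # ≈ 3.577709
y_val = 4 / math.sqrt(5)    # ≈ 1.788854
f_val = 4 * math.sqrt(5)    # ≈ 8.944272
print(x_val, y_val, f_val)
```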

Converted by *Mathematica*
(June 11, 2003)