Say we have a multivariate function to optimise, like f = x² + y² + z², under some constraints, like g₁ = x² + y² − z and g₂ = y + z − 1, both to equal zero.

The common method is that of Lagrange multipliers:

1. Add a variable λ for each constraint function — here, we'll use λ₁ and λ₂.
2. Declare the set of equations ∇f = λ₁∇g₁ + λ₂∇g₂.
3. Bring in the equations g₁ = 0 and g₂ = 0 (etc., if there are more constraints).
4. Solve for the λs and, more importantly, the inputs x, y, z.
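For the record, steps 2 and 3 are just the stationarity conditions of the usual Lagrangian L = f − λ₁g₁ − λ₂g₂; a standard reformulation, sketched here rather than taken from anything above:

```latex
% Stationarity of the Lagrangian  L = f - \lambda_1 g_1 - \lambda_2 g_2
% reproduces steps 2 and 3:
\nabla_{x,y,z} L = \nabla f - \lambda_1 \nabla g_1 - \lambda_2 \nabla g_2 = 0,
\qquad
\frac{\partial L}{\partial \lambda_1} = -g_1 = 0,
\qquad
\frac{\partial L}{\partial \lambda_2} = -g_2 = 0.
```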
Lagrange multipliers annoy me, insofar as they introduce extra variables. There is another way — arguably more direct, if perhaps more tedious in calculation and less often taught. I found it alone, tho surely someone else did first — probably Euler.
§ Lagrange, anyway
For the sake of a standard answer to check against, let's use Lagrange multipliers.
The gradient of x² + y² + z² is [2x, 2y, 2z]. Likewise, ∇(x² + y² − z) = [2x, 2y, −1], and ∇(y + z − 1) = [0, 1, 1]. So step 2 gives these equations:
2x = 2xλ₁
2y = 2yλ₁ + λ₂
2z = −λ₁ + λ₂
From the first equation, it readily follows that λ₁ = 1 or x = 0.
If λ₁ = 1, then λ₂ = 0 and z = −1/2. By the second constraint, y + z − 1 = 0, find that y = 3/2. By the first constraint, x² + y² − z = 0, find that x² = −11/4, which is a contradiction for real inputs.
If x = 0, then, by the first constraint, z = y², and, by the second constraint, y² + y − 1 = 0, so y = (−1 ± √5)/2 and z = (3 ∓ √5)/2.
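As a mechanical check of the hand calculation, here is a minimal sympy sketch (sympy being just one convenient symbolic solver, nothing the method requires) that encodes steps 1 through 4 for this f, g₁, g₂:

```python
import sympy as sp

x, y, z, l1, l2 = sp.symbols('x y z lambda1 lambda2', real=True)

f  = x**2 + y**2 + z**2
g1 = x**2 + y**2 - z
g2 = y + z - 1

def grad(expr):
    return [sp.diff(expr, v) for v in (x, y, z)]

# Step 2: grad f = lambda1 * grad g1 + lambda2 * grad g2, componentwise.
equations = [sp.Eq(df, l1 * dg1 + l2 * dg2)
             for df, dg1, dg2 in zip(grad(f), grad(g1), grad(g2))]
# Step 3: the constraints themselves.
equations += [sp.Eq(g1, 0), sp.Eq(g2, 0)]

# Step 4: solve for the multipliers and, more importantly, x, y, z.
for s in sp.solve(equations, [x, y, z, l1, l2], dict=True):
    print(s[x], s[y], s[z])
# The real critical points are x = 0, y = (-1 ± sqrt(5))/2, z = (3 ∓ sqrt(5))/2,
# matching the result above.
```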
§ Determinants
With one constraint, the method of Lagrange multipliers reduces to ∇f = λ∇g. ∇f and ∇g are vectors, which differ by a scalar factor iff they point in the same (or directly opposite) directions iff (for three dimensions) the cross product ∇f × ∇g = 0 iff (for two dimensions) the two-by-two determinant |∇f ∇g| = 0.
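As a quick sketch of the two-dimensional, one-constraint case, take a toy problem of my own choosing (not the example above): minimise f = x² + y² on the line x + y − 1 = 0.

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)

f = x**2 + y**2      # toy objective
g = x + y - 1        # single constraint, set to zero

# Columns of the 2x2 matrix: grad f and grad g.
M = sp.Matrix([[sp.diff(f, x), sp.diff(g, x)],
               [sp.diff(f, y), sp.diff(g, y)]])

# |grad f  grad g| = 0 together with g = 0 replaces grad f = lambda * grad g.
print(sp.solve([sp.Eq(M.det(), 0), sp.Eq(g, 0)], [x, y]))   # {x: 1/2, y: 1/2}
```

The determinant 2x − 2y vanishes exactly where ∇f is parallel to ∇g, and the answer is the point of the line nearest the origin.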
With two constraints, the method asks when ∇f = λ∇g + μ∇h. That would mean ∇f is a linear combination of ∇g and ∇h, which it is iff ∇f, ∇g, and ∇h are all coplanar iff (for three dimensions) the three-by-three determinant |∇f ∇g ∇h| = 0.
As it happens, the cross product is a wolf that can wear determinant's clothing. Just fill one column with basis vectors: ∇f × ∇g = |∇f ∇g [î, ĵ, k̂]|.
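A small sympy check of that identity for the two gradients above, with plain commuting symbols i, j, k standing in for the basis vectors (enough here, since they only ever appear to the first power):

```python
import sympy as sp

x, y, z, i, j, k = sp.symbols('x y z i j k')

grad_f = sp.Matrix([2*x, 2*y, 2*z])   # gradient of x^2 + y^2 + z^2
grad_g = sp.Matrix([2*x, 2*y, -1])    # gradient of x^2 + y^2 - z

# Determinant of the matrix whose columns are grad f, grad g, and the basis column.
det = sp.expand(sp.Matrix.hstack(grad_f, grad_g, sp.Matrix([i, j, k])).det())

# Read off the i, j, k coefficients and compare with the ordinary cross product.
coefficients = sp.Matrix([det.coeff(s) for s in (i, j, k)])
print(sp.simplify(coefficients - grad_f.cross(grad_g)))   # the zero vector
```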
Likewise, with zero constraints, the "method of Lagrange multipliers" — really, the first-derivative test — asks when ∇f = 0. Fill a three-by-three matrix with two columns of basis vectors: [∇f [î, ĵ, k̂] [î, ĵ, k̂]]. Suppose the basis vectors multiply like the cross product, as in geometric algebra. Then the determinant, rather than the usual 0 for a matrix with two equal columns, turns out to equal that ordinary column vector ∇f (up to a scalar constant).
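To spell that out, here is a sketch of the expansion, under the convention that we expand down the first column, keep the remaining basis vectors in left-to-right order, and let distinct basis vectors multiply like the cross product (ĵk̂ = î, k̂ĵ = −î, and so on cyclically):

```latex
\begin{vmatrix}
  f_x & \hat{\imath} & \hat{\imath} \\
  f_y & \hat{\jmath} & \hat{\jmath} \\
  f_z & \hat{k}      & \hat{k}
\end{vmatrix}
= f_x(\hat{\jmath}\hat{k} - \hat{k}\hat{\jmath})
- f_y(\hat{\imath}\hat{k} - \hat{k}\hat{\imath})
+ f_z(\hat{\imath}\hat{\jmath} - \hat{\jmath}\hat{\imath})
= 2 f_x \hat{\imath} + 2 f_y \hat{\jmath} + 2 f_z \hat{k}
= 2\,\nabla f .
```

So, under this convention, the scalar constant comes out to 2 in three dimensions.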
In every scenario so far — and I claim this holds for higher dimensions and more constraints — the core equations to optimise under constraints are the actual constraint equations, along with a single determinant set equal to zero. The matrix has its columns filled with the gradient of the function to optimise, each constraint gradient, and copies of the basis vectors, in order, to make it square.
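Here is that claim as a sympy sketch for the simplest family of cases, where the gradient columns alone already make the matrix square (n variables, n − 1 constraints); the function name is mine, nothing standard:

```python
import sympy as sp

def determinant_system(f, constraints, variables):
    """Return the single determinant equation plus the constraint equations,
    all set to zero, for n variables and n - 1 constraints (so no basis-vector
    columns are needed).  Fewer constraints would call for basis columns,
    as described above."""
    columns = [sp.Matrix([sp.diff(expr, v) for v in variables])
               for expr in (f, *constraints)]
    M = sp.Matrix.hstack(*columns)
    assert M.is_square, "this sketch assumes exactly len(variables) - 1 constraints"
    return [sp.Eq(M.det(), 0)] + [sp.Eq(g, 0) for g in constraints]
```

Called on the running example, determinant_system(x**2 + y**2 + z**2, [x**2 + y**2 - z, y + z - 1], [x, y, z]) returns the three equations of the example below, with the determinant merely left unfactored.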
§ Example

Fill a matrix with those gradients given above. We'll take its determinant.

∇f    ∇g₁    ∇g₂
2x    2x     0
2y    2y     1
2z    −1     1
The determinant, when simplified, is 2x(1 + 2z). The equations to consider are just

2x(1 + 2z) = 0
x² + y² − z = 0
y + z − 1 = 0
The first tells us that x = 0 or z = −1/2. If x = 0, then z = y², so y² + y − 1 = 0, so y = (−1 ± √5)/2 and z = (3 ∓ √5)/2. If z = −1/2, then y = 3/2 and x is imaginary. These are the same results as above; the method works, using only the variables given in the problem.
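Finally, the same example in sympy, as an end-to-end check of the determinant route:

```python
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)

M = sp.Matrix([[2*x, 2*x, 0],    # columns: grad f, grad g1, grad g2
               [2*y, 2*y, 1],
               [2*z, -1,  1]])

print(sp.factor(M.det()))        # 2*x*(2*z + 1)

solutions = sp.solve([M.det(), x**2 + y**2 - z, y + z - 1], [x, y, z], dict=True)
print(solutions)
# The real solutions are x = 0, y = (-1 ± sqrt(5))/2, z = (3 ∓ sqrt(5))/2,
# matching the two points the Lagrange-multiplier calculation found.
```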