Nisan comments on A fungibility theorem - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
I think that, depending on what the v's are, choosing a Pareto optimum is actually quite undesirable.
For example, let v1 be min(1000, how much food you have), and let v2 be min(1000, how much water you have). Suppose you can survive for a number of days equal to a soft minimum of v1 and v2 (for example, 0.001 v1 + 0.001 v2 + min(v1, v2)). All else being equal, more v1 is good and more v2 is good. But maximizing a convex combination of v1 and v2 can lead to avoidable dehydration or starvation. Suppose you assign weights to v1 and v2, and are offered either 1000 of one resource or 100 of each. If the bulk resource is the more valued one, you will pick the 1000 of it, causing starvation or dehydration after about 1 day when you could have lasted over 100. If which resource is offered in bulk is selected randomly, then any convex optimizer will die early at least half the time.
A non-convex aggregate utility function, for example the number of days survived (0.001 v1 + 0.001 v2 + min(v1, v2)), is much more sensible. However, it will not select Pareto optima: it will always take the 100 of each, even though the policy of always taking the 1000 of a randomly chosen resource yields greater expected v1 and expected v2 (500 each, versus 100 each).
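A minimal sketch of the arithmetic above, assuming the three options named in the example (the function names and weights are illustrative, not from the original comment):

```python
# Compare a convex (linear) aggregator of v1 and v2 with the
# non-convex "days survived" aggregator from the example.

def days_survived(v1, v2):
    # Soft minimum from the comment: 0.001*v1 + 0.001*v2 + min(v1, v2)
    return 0.001 * v1 + 0.001 * v2 + min(v1, v2)

def convex_score(v1, v2, w1=0.6):
    # Any convex combination w1*v1 + (1 - w1)*v2, with w1 in [0, 1];
    # w1 = 0.6 is an arbitrary illustrative weight favoring food.
    return w1 * v1 + (1 - w1) * v2

options = {
    "1000 food": (1000, 0),
    "1000 water": (0, 1000),
    "100 of each": (100, 100),
}

# The convex optimizer takes all of its more-weighted resource
# (score 600 here, versus 100 for the balanced option)...
best_convex = max(options, key=lambda k: convex_score(*options[k]))

# ...while maximizing days survived picks the balanced option
# (100.2 days, versus 1 day for all of one resource).
best_days = max(options, key=lambda k: days_survived(*options[k]))

print(best_convex)  # 1000 food
print(best_days)    # 100 of each
```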
This example doesn't satisfy the hypotheses of the theorem because you wouldn't want to optimize for v1 if your water was held fixed. Presumably, if you have 3 units of water and no food, you'd prefer 3 units of food to a 50% chance of 7 units of food, even though the latter leads to a higher expectation of v1.
You would if you could survive for v1*v2 days.
Ah, okay. In that case, if you're faced with a number of choices that offer varying expectations of v1 but all offer a certainty of, say, 3 units of water, then you'll want to optimize for v1. But if the choices merely share the same expectation of v2, then you won't be optimizing for v1. So the theorem doesn't apply because the agent doesn't optimize for each value ceteris paribus in the strong sense described in this footnote.
Ok, this is correct. I hadn't understood the preconditions well enough. It seems that the important question is now whether things people intuitively think of as different values (my happiness, total happiness, average happiness) satisfy this condition.
Admittedly, I'm pretty sure they don't.