For example, you might want you and everyone else to both be happy, and happiness of one without the other would be much less valuable.
Now you've got me curious. I don't see how any selection of values representative of the agent being modeled could desire a non-Pareto-optimal scenario. The given example (quoted above), for one, is something I'd represent like this:
Let x = my happiness, y = happiness of everyone else
To model the fact that each is worthless without the other, let:
v1 = min(x, 10y)
v2 = min(y, 10x)
Choice A: Gain 10 x, 0 y
Choice B: Gain 0 x, 10 y
Choice C: Gain 2 x, 2 y
It seems very obvious that the sole Pareto-optimal choice (in terms of v1 and v2) is the only desirable policy. Taking utility as v1 + v2, it is four for choice C and zero for choices A and B.
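The arithmetic can be checked directly. A minimal sketch, assuming (as above) that overall utility is just v1 + v2:

```python
def v1(x, y):
    """min(x, 10y): my happiness, worthless without everyone else's."""
    return min(x, 10 * y)

def v2(x, y):
    """min(y, 10x): everyone else's happiness, worthless without mine."""
    return min(y, 10 * x)

def utility(x, y):
    # Assumption: total utility is the sum of the two components.
    return v1(x, y) + v2(x, y)

# The three choices: (gain in x, gain in y)
choices = {"A": (10, 0), "B": (0, 10), "C": (2, 2)}
for name, (x, y) in choices.items():
    print(name, utility(x, y))
# A and B yield 0; C yields 4.
```

In (v1, v2) space, choice C strictly dominates A and B, which is the sense in which it is the sole Pareto-optimal option.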
This may reduce to exactly what AlexMennen said, too, I guess. I have never encountered an intuition or decision problem that couldn't, at least in principle, be resolved into a utility function with perfect modeling accuracy, given enough time and computational resources.
This example doesn't satisfy the hypotheses of the theorem because you wouldn't want to optimize for v1 if your water was held fixed. Presumably, if you have 3 units of water and no food, you'd prefer 3 units of food to a 50% chance of 7 units of food, even though the latter leads to a higher expectation of v1.
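A quick check of those numbers. This is a sketch under the assumption that v1 here has the same min(resource, 10 × other) shape as in the model above, with water held fixed at 3 units; the parent comment's exact definition of v1 may differ:

```python
def v1(food, water=3):
    # Assumed form: min(food, 10 * water), with water fixed at 3.
    return min(food, 10 * water)

certain = v1(3)                     # 3 units of food for sure
gamble = 0.5 * v1(7) + 0.5 * v1(0)  # 50% chance of 7 units, else nothing
print(certain, gamble)              # 3 versus 3.5
```

Under this form, the gamble's expected v1 (3.5) indeed exceeds the sure thing's (3), yet one would plausibly still prefer the certain 3 units, which is why the example falls outside the theorem's hypotheses.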
You would if you could survive for v1*v2 days.