Dagon comments on Value Stability and Aggregation - Less Wrong

8 Post author: jimrandomh 06 February 2011 06:30PM


Comment author: Will_Sawin 06 February 2011 08:19:29PM 1 point

If you have a (VNM expected) utility function and those subdivisions are also (VNM expected) utility functions, the only reasonable way to aggregate them is linear weighting.

Otherwise, the big utility function won't agree with the small utility functions about which lotteries are best.
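This agreement property follows from linearity of expectation: with linear weighting, the big function's expected utility for any lottery is the weighted sum of the small functions' expected utilities. A minimal sketch (the outcomes, utilities, and weights here are all hypothetical, chosen only for illustration):

```python
# Two small utility functions over three outcomes, and two lotteries
# (probability distributions over outcomes). All values are arbitrary.
u1 = {"a": 1.0, "b": 0.0, "c": 0.2}
u2 = {"a": 0.0, "b": 1.0, "c": 0.9}
lottery_p = {"a": 0.5, "b": 0.5}
lottery_q = {"c": 1.0}

def expected(u, lottery):
    """Expected utility of a lottery under utility function u."""
    return sum(prob * u[outcome] for outcome, prob in lottery.items())

w1, w2 = 0.3, 0.7  # arbitrary positive weights

# The big utility function: a linear weighting of the small ones.
big_u = {o: w1 * u1[o] + w2 * u2[o] for o in u1}

# By linearity of expectation, for every lottery the big function's
# expected utility equals the weighted sum of the small functions'
# expected utilities -- so their lottery rankings stay consistent.
for lot in (lottery_p, lottery_q):
    combined = w1 * expected(u1, lot) + w2 * expected(u2, lot)
    assert abs(expected(big_u, lot) - combined) < 1e-9
```

A nonlinear aggregator (say, squaring each small function's value first) would break this identity, which is why it can disagree with the small functions about which lotteries are best.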

Comment author: jimrandomh 06 February 2011 10:00:22PM 2 points

I acknowledge that this is a problem, but my claim is that this is less of a problem than allowing one broken small utility function to take over the whole utility function by rescaling itself.

Comment author: Will_Sawin 06 February 2011 11:09:36PM 1 point

Why do you think that the big utility function has to have problems?

I suppose because we're constructing it out of clearly-defined-but-wrong approximations to the small utility functions.

In which case, we should deviate from addition in accordance with the flaws in those approximations.

Suppose that we expect small functions to sometimes break. Then E(actual utility | calculated utility = x) looks similar to x when |x| is small, but is much closer to 0 when |x| is large. If we can estimate this S-curve, we can make our method more robust against this particular problem.
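One way to sketch such a correction: pass the calculated utility through a bounded S-curve, so small values pass through nearly unchanged while extreme values (the ones most likely to come from a broken small function) are damped. The tanh shape and the scale parameter below are illustrative assumptions, not anything specified in the comment:

```python
import math

def corrected_utility(x, scale=10.0):
    """Hypothetical S-curve stand-in for E(actual | calculated = x).

    Near-identity for small |x|, but saturates at +/- scale, so one
    broken small utility function cannot dominate the aggregate by
    rescaling itself to report an enormous value.
    """
    return scale * math.tanh(x / scale)
```

For |x| much smaller than `scale` this is approximately the identity, while a reported utility of 1e6 is capped near 10; a curve estimated from data about how the small functions actually break could even pull extreme values back toward 0 rather than merely capping them.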

Another inference we can make is that, when |x| is large, it becomes more useful to investigate whether the calculated utility closely approximates the actual utility, so any system capable of doing that investigation becomes a better idea.

We should usually analyze further possible problems, including flaws in this approximation itself, in the same manner: by looking at what deviations occur between estimated utility and actual utility.