DanielLC comments on The Empty White Room: Surreal Utilities - Less Wrong

11 Post author: linkhyrule5 23 July 2013 08:37AM




Comment author: DanielLC 23 July 2013 08:11:56PM 0 points

I give seat cushions zero value. I give the comfort they bring me zero value. The only valuable thing about them is the happiness they bring from the comfort. Unless the nanofab can make me as happy as my current happiness plus Frank's combined, nothing it makes will be worth it. It probably could, but that's not the point.

As for the idea of surreal utilities, there's nothing wrong with it in principle. The axiom they violate isn't anything particularly bad to violate. The problem is that, realistically speaking, you might as well just round infinitesimal utility down to zero. If you consider a cushion to be worth infinitesimally many lives, then if you're given a choice that gives you an extra cushion and has zero expected change in the number of lives, you'd take it. But you won't get that choice. You'll get choices where the expected change in number of lives is very small, but the expected value from lives will always be infinitely larger than the expected value from cushions.
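The dominance argument above can be sketched with a toy model (my illustration, not from the comment itself): treat a tiered utility as a (lives, cushions) pair compared lexicographically, so the cushion tier acts like an infinitesimal that only matters when the expected number of lives is exactly tied.

```python
# Toy model of tiered (surreal-style) utilities: a (higher, lower) pair
# compared lexicographically. The lower tier behaves like an infinitesimal:
# it breaks exact ties but never outweighs a nonzero higher-tier difference.

def expected(outcomes):
    """Expected tiered utility over (probability, (lives, cushions)) outcomes."""
    lives = sum(p * u[0] for p, u in outcomes)
    cushions = sum(p * u[1] for p, u in outcomes)
    return (lives, cushions)

# Choice A: exactly zero expected change in lives, one extra cushion.
a = expected([(1.0, (0, 1))])
# Choice B: a one-in-a-trillion chance of saving one life, no cushions.
b = expected([(1e-12, (1, 0)), (1 - 1e-12, (0, 0))])

# Python's tuple comparison is lexicographic, which is exactly the
# tier rule: cushions win only on an exact tie in expected lives.
assert a > (0, 0)   # the free cushion beats doing nothing...
assert b > a        # ...but any nonzero chance of a life dominates it
```

This is why, as DanielLC says, infinitesimal tiers might as well round to zero in practice: real choices essentially never produce an exact tie at the top tier.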

Comment author: linkhyrule5 23 July 2013 08:24:43PM 0 points

See: Flaws. This is the same problem as with Pascal's Mugging, really; it doesn't go away when you switch to reals, it just requires weirder (but still plausible) situations.

Seat cushions are meant to be a slightly humorous example. Omega can also hook you up with infinite Fun, as mentioned in the post, which I'm quickly realizing could use a rewrite.

Comment author: DanielLC 23 July 2013 08:57:39PM 0 points

In that case I'd pick the Fun. I accept the repugnant conclusion and all, but the larger population still has to have more net happiness than the smaller one does.

Comment author: linkhyrule5 23 July 2013 09:05:16PM * 0 points

*shrug* I did list that as a separate tier. Surreal Utilities are meant to be a way to formalize tiers; the actual result of the utility-computation depends on where you put your tiers.

The point of this post is to show that humans really do have tiers, and that surreals do a good job of representing tiers; the question of how to assign utilities remains open.

Comment author: DanielLC 23 July 2013 10:03:25PM 0 points

How do you know humans have tiers? The situation has never come up before. We've never had the infinite coincidence where the value at the highest tier is zero.

Also, why does it matter? It's never going to come up either. If you program an AI to have tiers, it will quickly optimize them out. Why waste processing power on lower tiers when it could instead be spent on even a slim chance of helping with the higher ones?

Comment author: linkhyrule5 23 July 2013 10:08:55PM * 0 points

See: gedankenexperiment. I can guess what I'd choose given a blank white room.

And that is a flaw in the system. But it's one that real-valued utility systems have as well. See: Pascal's Mugging. An AI vulnerable to Pascal's Mugging will likewise spend all its time trying to break free of a hypothetical Matrix.

I did mention this under Flaws, you know...