orthonormal comments on St. Petersburg Mugging Implies You Have Bounded Utility - Less Wrong

10 Post author: TimFreeman 07 June 2011 03:06PM


Comment author: orthonormal 08 June 2011 07:42:52AM 1 point [-]

Upvoted because the objection makes me uncomfortable, and because none of the replies satisfy my mathematical/aesthetic intuition.

However, requiring utilities to be bounded also strikes me as mathematically ugly and practically dangerous: what if the universe turns out to be much larger than previously thought, and the AI says "I'm at 99.999% of achievable utility already, it's not worth it to expand farther or live longer"?

Thus I view this as a currently unsolved problem in decision theory, and a better intuition-pump version than Pascal's Mugging. Thanks for posting.
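To make the worry concrete, here is a minimal sketch (my own illustration, not anything from the post) of why an agent maximizing a bounded utility function refuses to expand once it is near its bound. The utility function u(x) = 1 - exp(-x) and the "doubling resources with risk p of losing everything" gamble are both assumptions chosen for illustration:

```python
import math

U_MAX = 1.0  # the utility bound (illustrative choice)

def u(resources):
    """Bounded utility: increases with resources but never exceeds U_MAX."""
    return U_MAX * (1.0 - math.exp(-resources))

x0 = -math.log(1e-5)   # current resources, chosen so u(x0) = 0.99999 * U_MAX
stay = u(x0)

def eu_expand(p):
    """Expected utility of expanding: resources double, but with
    probability p the agent loses everything (utility 0)."""
    return (1.0 - p) * u(2 * x0)

# The remaining headroom above u(x0) is only about 1e-5, so almost any
# risk of losing what the agent already has swamps the possible gain.
print(eu_expand(1e-4) < stay)  # risk of 0.01%: expansion declined
print(eu_expand(1e-6) > stay)  # only a far tinier risk makes it worthwhile
```

The exact threshold depends on the shape of the bounded function, but the qualitative behavior (near the bound, the agent becomes extremely risk-averse about further gains) holds for any utility function that approaches its supremum.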

Comment author: Peter_de_Blanc 08 June 2011 08:41:20AM 5 points [-]

what if the universe turns out to be much larger than previously thought, and the AI says "I'm at 99.999% of achievable utility already, it's not worth it to expand farther or live longer"?

It's not worth what?

Comment author: orthonormal 08 June 2011 11:02:52PM 1 point [-]

A small risk of losing the utility it was previously counting on.

Of course you can do intuition pumps either way: I don't feel like I'd want the AI to sacrifice everything in the universe we know for a 0.01% chance of making it in a bigger universe, but some level of risk has to be worth a vast increase in potential fun.
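The two-sidedness of the intuition pump can be put in numbers. In this sketch (my own illustration; all the specific values are assumptions), a bounded agent can never justify sacrificing everything for a 0.01% chance, while an unbounded (linear) agent always can, given a large enough payoff:

```python
U_MAX = 1.0
p = 1e-4                 # the 0.01% chance of making it in the bigger universe

# Bounded agent: utility of keeping the universe we know (illustrative value).
u_known = 0.9 * U_MAX
# The gamble's expected utility is capped at p * U_MAX, no matter how
# big the bigger universe is.
eu_gamble_bounded = p * U_MAX
print(eu_gamble_bounded < u_known)   # True: the gamble is never worth it

# Unbounded agent: utility linear in "fun"; a vast enough payoff dominates.
fun_known = 1.0
fun_big = 1e6                        # hypothetical vast increase in potential fun
eu_gamble_unbounded = p * fun_big
print(eu_gamble_unbounded > fun_known)  # True: the gamble is always taken
```

This is exactly the tension in the thread: boundedness buys immunity to muggings by capping what any long-shot can be worth, at the price of also capping what any genuine opportunity can be worth.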

Comment author: Peter_de_Blanc 09 June 2011 08:09:08AM 1 point [-]

It seems to me that expanding further would reduce the risk of losing the utility it was previously counting on.

Comment author: orthonormal 10 June 2011 07:33:49AM 0 points [-]

LCPW isn't even necessary: do you really think that it wouldn't make a difference that you'd care about?

Comment author: Peter_de_Blanc 10 June 2011 08:27:20AM 0 points [-]

LCPW cuts two ways here, because there are two universal quantifiers in your claim. You need to look at every possible bounded utility function, not just every possible scenario. At least, if I understand you correctly, you're claiming that no bounded utility function reflects your preferences accurately.

Comment author: drethelin 08 June 2011 05:16:16PM 1 point [-]

Resources, whether physical or computational. Presumably the AI is programmed to use resources parsimoniously, with terms governing their various applications, including powering the AI and deciding what to do. If the AI is programmed to limit what it does at some large but arbitrary point, because we don't want it taking over the universe or whatever, then this point might actually come before we want it to stop doing whatever it's doing.

Comment author: Peter_de_Blanc 09 June 2011 08:09:59AM 0 points [-]

That doesn't sound like an expected utility maximizer.

Comment author: endoself 09 June 2011 12:48:14AM 0 points [-]

What's wrong with this one? Would you be comfortable with that reply if it were backed by rigorous math?