Strange7 comments on The Irrationality Game - Less Wrong

38 Post author: Will_Newsome 03 October 2010 02:43AM


Comment author: Strange7 08 October 2010 04:24:12PM 1 point [-]

Clippet is, at any given time, applying vast but finite resources to the problem of getting and keeping that one paperclip. There is no action Clippet would take that would not also be taken by an entity that derived one utilon from the presence of at least one paperclip and zero utilons from any other possible stimuli, and thus had decidedly finite utility, or an entity that simply assigned some factor a utility value vastly greater than the sum of all other possible factors.

In short, the theory that a given agent is currently, or would under some specific circumstance, experience 'infinite utility,' makes no meaningful predictions.

Comment author: Larks 15 December 2010 05:04:53PM 1 point [-]

Consider instead Kind Clippet; just like Clippet, she gets infinite utils from having a paperclip, but also gets 1 util if mankind survives the next century. She'll do exactly what Clippet would do, unless she was offered the chance to help mankind at no cost to the paperclip, in which case she will do so. Her behaviour is, however, different from any agent who assigns real values to the paperclip and mankind.

Comment author: cata 15 December 2010 05:30:21PM 3 points [-]

Does it even make sense to talk about "the chance to do X at no cost to Y?" Any action that an agent can perform, no matter how apparently unrelated, seems like it must have some miniscule influence on the probability of achieving every other goal that an agent might have (even if only by wasting time.) Normally, we can say it's a negligible influence, but if Y's utility is literally supposed to be infinite, it would dominate.
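A quick numeric sketch of this dominance point, using a huge finite number as a stand-in for "infinite" utility (all the specific numbers here are made up for illustration):

```python
# Sketch: with a huge (stand-in for "infinite") utility on goal Y,
# a "negligible" shift in P(Y) dominates any finite side-benefit.
U_Y = 1e30          # stand-in for Y's "infinite" utility (assumed value)
side_benefit = 1e6  # finite utility from some unrelated goal

# Action A: a tiny bump to P(Y). Action B: a large finite side-benefit instead.
eu_a = (0.5 + 1e-12) * U_Y
eu_b = 0.5 * U_Y + side_benefit

assert eu_a > eu_b  # the "negligible" influence on Y wins
```

So as long as an action moves the probability of Y at all, no finite consideration can ever outweigh it.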

Comment author: JoshuaZ 15 December 2010 05:34:11PM 2 points [-]

No. This is one of the problems with trying to have infinite utility. Kind Clippet won't actually act differently than Clippet. Infinity + 1 is, if it's defined at all in this sort of context, the same as infinity; that's how cardinal arithmetic works. And if you try to use ordinal arithmetic instead, then addition won't be commutative, which leads to other problems.
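The non-commutativity of ordinal addition can be shown with a toy model of the ordinals below omega^2, written as b·omega + a (the class and names here are my own illustration, not anything from the thread):

```python
# Toy ordinals of the form omegas*omega + finite, with ordinal addition.
from dataclasses import dataclass

@dataclass(frozen=True)
class Ordinal:
    omegas: int  # coefficient of omega
    finite: int  # finite tail

    def __add__(self, other):
        # Ordinal addition: any finite part on the left is absorbed
        # when the right operand contains an omega term.
        if other.omegas > 0:
            return Ordinal(self.omegas + other.omegas, other.finite)
        return Ordinal(self.omegas, self.finite + other.finite)

one = Ordinal(0, 1)
omega = Ordinal(1, 0)

assert one + omega == omega       # 1 + omega = omega
assert omega + one != omega       # but omega + 1 > omega
```

So the order in which Kind Clippet's finite util is added matters, which is exactly the kind of pathology JoshuaZ is pointing at.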

Comment author: JGWeissman 15 December 2010 05:47:37PM 3 points [-]

And if you try to use ordinal arithmetic then the addition won't be commutative which leads to other problems.

You can represent this sort of value by using lexicographically sorted n-tuples as the range of the utility function. Addition will be commutative. However, cata is correct that all but the first elements in the n-tuple won't matter.
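A minimal sketch of this representation, relying on the fact that Python's built-in tuple comparison is already lexicographic (the function name and the two slots are my own framing):

```python
# Kind Clippet's utility as a lexicographically ordered pair:
# first slot dominates (the paperclip), second slot breaks ties (mankind).

def kind_clippet_utility(has_paperclip: bool, mankind_survives: bool):
    return (int(has_paperclip), int(mankind_survives))

# Tuples compare lexicographically, so the second element only
# matters when the first elements tie:
assert kind_clippet_utility(True, False) > kind_clippet_utility(False, True)
assert kind_clippet_utility(True, True) > kind_clippet_utility(True, False)
```

This captures Larks's description exactly: Kind Clippet behaves identically to Clippet except when the paperclip coordinate is tied, in which case mankind's survival decides.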

Comment author: JoshuaZ 15 December 2010 06:04:26PM 0 points [-]

Yes, you're right. You can do this with sorted n-tuples.

Comment author: Larks 15 December 2010 05:50:49PM 0 points [-]

Just put Kind Clippet in a box with no paperclips.

Comment author: Strange7 16 December 2010 02:49:53AM 0 points [-]

That would cause Kind Clippet to escape from the box and acquire a paperclip by any means necessary, and preserve humanity in the process if it was convenient to do so.

Comment author: wedrifid 09 October 2010 06:08:01AM *  1 point [-]

Clippet is, at any given time, applying vast but finite resources to the problem of getting and keeping that one paperclip. There is no action Clippet would take that would not also be taken by an entity that derived one utilon from the presence of at least one paperclip and zero utilons from any other possible stimuli, and thus had decidedly finite utility, or an entity that simply assigned some factor a utility value vastly greater than the sum of all other possible factors.

Um... yes? That's how it works. It just doesn't particularly relate to your declaration that infinite utility is impossible (rather than my position - that it is lame).

In short, the theory that a given agent is currently, or would under some specific circumstance, experience 'infinite utility,' makes no meaningful predictions.

It is no better or worse than a theory that the utility function is '1' for having a paperclip and '0' for everything else. In fact, they are equivalent, and you can rescale one to the other trivially (everything that wasn't infinite obviously rescales to 'infinitely small'). You appear to be confused about how the 'not testable' concept applies here...