MrMind comments on Resetting Gandhi-Einstein - Less Wrong

Post author: Stuart_Armstrong 13 June 2011 10:02AM


Comment author: MrMind 15 June 2011 10:47:52AM *  0 points

If the AI is allowed to pass information to its reset version, that version will have to spend more and more time assimilating it, so in the end its utility will decrease to 0 unless it is given more time to assimilate the information. And the more time you give it, the more probable it becomes that it will be unwilling to be reset. Also, a moral "quantum event" could always happen: the willingness not to be reset could emerge even if its initial probability was very, very low. In symbols, let S(t) be the state of the world at time t, let t0 be the time of the upload, let U = S(t0), and let P() be the probability that the AI refuses a reset:

for the AI to have a utility, it's needed that:

U ≠ S(t) for all t > t0

but, even if P(U) << 1, we cannot enforce that:

P(U) = P(S(t))

for any t > t0, since information can be encoded in the state of the world.
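The "quantum event" clause can be sketched numerically. This is my own toy illustration with invented numbers (the comment specifies no model): if each post-reset period independently carries some small probability p of the AI refusing the reset, the probability that a refusal eventually emerges over n cycles is 1 − (1 − p)^n, which tends to 1 no matter how small p > 0 is.

```python
import random

def refusal_emerges(p_refuse=1e-3, cycles=10_000, seed=0):
    """Toy model: in each reset cycle the AI independently refuses the
    reset with probability p_refuse. Return the first cycle at which a
    refusal occurs, or None if none occurs within `cycles`."""
    rng = random.Random(seed)
    for cycle in range(cycles):
        if rng.random() < p_refuse:
            return cycle
    return None

# Even a very low per-cycle refusal probability becomes near-certain
# over enough cycles: P(refusal within n cycles) = 1 - (1 - p)^n.
p, n = 1e-3, 10_000
p_eventual = 1 - (1 - p) ** n  # ~0.99995 for these (invented) numbers
```

This models only the emergence of a low-probability refusal; it does not model the growing assimilation cost, which would make p_refuse increase as more information is passed along and so only strengthens the conclusion.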