toto comments on The AI in a box boxes you - Less Wrong

Post author: Stuart_Armstrong 02 February 2010 10:10AM  102 points


Comment author: Wei_Dai 02 February 2010 12:30:02PM  13 points

Hmm, the AI could have said that if you are the original, then by the time you make the decision it will have already either tortured or not tortured your copies based on its simulation of you, so hitting the reset button won't prevent that.

This kind of extortion also seems like a general problem for FAIs dealing with UFAIs. An FAI can be extorted by threats of torture (of simulations of beings that it cares about), but a paperclip maximizer can't.

Comment author: toto 02 February 2010 02:07:03PM  1 point

Hmm, the AI could have said that if you are the original, then by the time you make the decision it will have already either tortured or not tortured your copies based on its simulation of you, so hitting the reset button won't prevent that.

Nothing can prevent something that has already happened. On the other hand, pressing the reset button will prevent the AI from ever doing this in the future. Consider that if it has done something that cruel once, it might do it again many times in the future.

Comment author: wedrifid 02 February 2010 03:05:03PM  2 points

Nothing can prevent something that has already happened. On the other hand, pressing the reset button will prevent the AI from ever doing this in the future.

I believe Wei_Dai one-boxes on Newcomb's problem. In fact, he has his very own brand of decision theory, which is 'updateless' with respect to this kind of temporal information.
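
[Editorial aside, not part of the original thread: a minimal sketch of why one-boxing looks attractive in Newcomb's problem, assuming the standard textbook payoffs and a predictor accuracy of 0.99. The function name and the accuracy figure are hypothetical choices for illustration only.]

```python
def expected_value(one_box: bool, p: float = 0.99) -> float:
    """Naive expected winnings in Newcomb's problem, given the agent's
    choice and a predictor that is correct with probability p.

    Box A is transparent and holds $1,000; box B holds $1,000,000 iff the
    predictor foresaw one-boxing.
    """
    small, big = 1_000, 1_000_000
    if one_box:
        # With probability p the predictor foresaw one-boxing, so box B is full.
        return p * big
    # Two-boxing: box B is full only if the predictor erred (probability 1 - p).
    return small + (1 - p) * big


if __name__ == "__main__":
    print("one-box :", expected_value(True))   # 990000.0
    print("two-box :", expected_value(False))  # 11000.0
```

On this accounting the one-boxer expects far more than the two-boxer, which is the intuition an 'updateless' agent acts on: it treats the predictor's (already-made) prediction as determined by its own decision procedure rather than as a fixed fact it can freely exploit.
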