Incorrect comments on Trapping AIs via utility indifference - Less Wrong Discussion

3 Post author: Stuart_Armstrong 28 February 2012 07:27PM

Comment author: Incorrect 29 February 2012 07:47:26PM 0 points

The AI still has a motive to escape in order to prepare to optimize its sliver. It doesn't necessarily need us to ensure it escapes faster in its sliver.

Comment author: Stuart_Armstrong 01 March 2012 02:23:44PM 0 points

What does this translate to in terms of the initial setup, rather than the analogous one?