Incorrect comments on Trapping AIs via utility indifference - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (32)
The AI still has a motive to escape in order to prepare to optimize its sliver; it doesn't necessarily need us to help it escape faster within that sliver.
What does this translate to in terms of the original setup, rather than the analogous one?