Gunnar_Zarncke comments on The Ultimate Testing Grounds - Less Wrong Discussion

6 Post author: Stuart_Armstrong 28 October 2015 05:08PM

Comments (14)

Comment author: Dagon 28 October 2015 08:52:22PM 2 points

Are you assuming some way of preventing the AI from acting strategically, with the knowledge that some of its choices are likely to be tests? If the AI is, in fact, much smarter than the gatekeepers, wouldn't you expect it to be able to determine what's a trap and what's a real decision that will actually lead to the outcome it desires?

I think there's a timeframe issue as well. You might be able to simulate an immediate decision, but it's hard to test what an AI will do with some piece of information after it's had 10 years of self-improvement and pondering.

Comment author: Gunnar_Zarncke 28 October 2015 10:25:49PM 1 point

I wondered about that too, but this is not like a child who might notice that his parents are not actually looking when they say they are. The AI is built in such a way, via the techniques mentioned, that it doesn't matter whether the AI understands the reasons, because it treats them as irrelevant. It could only make a difference if we have overlooked some escape route, and an exhaustive search could plug such holes.