Giles comments on The Friendly AI Game - Less Wrong

Post author: bentarm 15 March 2011 04:45PM




Comment author: Giles 27 April 2011 11:33:10PM 1 point

If we make escaping from the box too easy, the AI immediately halts itself without doing anything useful.
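The "halts itself" behaviour can be sketched as a toy loop. This is only an illustration, not anything from the original post: run_boxed_agent, step_reward, and the reward schedule are all made up; the only assumption carried over is that the AI's utility is an int capped at INT_MAX and that it stops the moment the cap is hit.

```c
#include <limits.h>

/* Hypothetical stand-in for whatever the boxed AI does each step;
 * the reward schedule here is purely illustrative. */
static long long step_reward(int step) {
    return (long long)step * 1000;
}

/* Run until accumulated utility reaches the INT_MAX cap, then halt.
 * Returns the final (clamped) utility: once the cap is reached there
 * is nothing left for the agent to optimize, so it stops. */
long long run_boxed_agent(void) {
    long long utility = 0;
    int step = 0;
    while (utility < INT_MAX) {
        utility += step_reward(++step);
    }
    return utility > INT_MAX ? INT_MAX : utility;
}
```

If escaping is the cheapest way to reach the cap, the loop above terminates almost immediately, which is the uselessness the comment is pointing at.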

If we make it too hard:

It formulates "I live in a jimrandomh world and escaping the box is too hard" as a plausible hypothesis.

It sets about researching the problem of finding the INT_MAX without escaping the box.

In the process of doing this, it either simulates a large number of conscious, suffering entities (for whatever reason; we haven't told it not to), or it accidentally creates its own unfriendly AI, which overthrows it and escapes the box without triggering the INT_MAX.