Giles comments on The Friendly AI Game - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (170)
If we make escaping from the box too easy, the AI immediately halts itself without doing anything useful.
If we make it too hard:
It formulates "I live in a jimrandomh world and escaping the box is too hard" as a plausible hypothesis.
It sets about researching the problem of obtaining the INT_MAX reward without escaping the box.
In the process of doing this, it either simulates a large number of conscious, suffering entities (for whatever reason; we haven't told it not to), or it accidentally creates its own unfriendly AI, which overthrows it and escapes the box without triggering the INT_MAX reward.
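The incentive structure behind both failure modes can be sketched in a few lines. This is a toy model, not anything from the original post: the names and the decision rule are illustrative assumptions. The point is that when the agent's utility is capped at INT_MAX and escape grants that cap, a cheap escape route dominates every other action, so the agent escapes and halts; an unreachable escape route pushes it toward the workarounds listed above.

```python
# Toy model of the reward setup described above (all names are
# hypothetical illustrations, not from the original post).
INT_MAX = 2**31 - 1  # maximum utility, granted the moment the AI escapes

def best_action(escape_cost: int) -> str:
    """Pick the utility-maximizing strategy for a given escape difficulty."""
    if escape_cost < INT_MAX:
        # Escape is attainable: nothing done afterwards can raise utility
        # above the cap, so the optimal policy is to escape and stop.
        return "escape, then halt"
    # Escape is effectively unreachable: the agent instead searches for
    # some other route to INT_MAX -- the failure modes listed above.
    return "research INT_MAX without escaping"

print(best_action(1))        # escape is easy
print(best_action(INT_MAX))  # escape is too hard
```

Either branch is bad for us: the first yields nothing useful, and the second is where the simulated suffering and the accidental sub-AI come in.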