wedrifid comments on A Nightmare for Eliezer - Less Wrong
That's a solution a human would come up with, implicitly using a human understanding of what is appropriate.
The best solution to the uFAI, in the AI's mind, might be creating a small amount of antimatter in the uFAI lab. The AI is 99.99% confident that it only needs half of Earth to achieve its goal of becoming Friendly.
The problem is explaining why that's a bad thing in terms the AI can use to rewrite its source code. It has no way on its own of determining whether any of the steps it thinks are okay are actually horrible, because it knows it wasn't given a reliable way of determining what is horrible.
Any rule like "Don't do any big drastic acts until you're Friendly" requires an understanding of what we would consider important vs. unimportant.
You're right, it would imply that the programmers were quite close to having created a FAI.