Wes_W comments on Superintelligence 9: The orthogonality of intelligence and goals - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Because an AI with a non-well-defined goal structure that changes its mind and turns into a paperclipper is just about as bad as building a paperclipper directly. It's not obvious to me that non-well-defined non-paperclippers are easier to make than well-defined non-paperclippers.
Paperclippers aren't dangerous unless they are fairly stable paperclippers, and something as arbitrary as paperclipping is a very poor candidate for an attractor. The good candidates are the goals Omohundro thinks AIs will converge on.
Why do you think so?
Which bit? There are about three claims there.
The second and third.
I've added a longer treatment.
http://lesswrong.com/lw/l4g/superintelligence_9_the_orthogonality_of/blsc