Luke_A_Somers comments on Superintelligence 9: The orthogonality of intelligence and goals - Less Wrong
It seems that way because we are human and we don't have a clearly defined, consistent goal structure. As you find out new things, you can flesh out your goal structure more and more.
If one starts with a well-defined goal structure, what knowledge might alter it?
If starting with a well-defined goal structure is a necessary prerequisite for a paperclipper, why do that?
Because an AI with a non-well-defined goal structure that changes its mind and turns into a paperclipper is just about as bad as building a paperclipper directly. It's not obvious to me that non-well-defined non-paperclippers are easier to make than well-defined non-paperclippers.
Paperclippers aren't dangerous unless they are fairly stable paperclippers... and something as arbitrary as paperclipping is a very poor candidate for an attractor. The good candidates are the goals Omohundro thinks AIs will converge on.
Why do you think so?
Which bit? There are about three claims there.
The second and third.
I've added a longer treatment.
http://lesswrong.com/lw/l4g/superintelligence_9_the_orthogonality_of/blsc