Wes_W comments on Superintelligence 9: The orthogonality of intelligence and goals

Post author: KatjaGrace 11 November 2014 02:00AM


Comment author: Wes_W 11 November 2014 11:10:40PM 1 point

Because an AI with a non-well-defined goal structure that changes its mind and turns into a paperclipper is just about as bad as building a paperclipper directly. It's not obvious to me that non-well-defined non-paperclippers are easier to make than well-defined non-paperclippers.

Comment author: TheAncientGeek 12 November 2014 01:13:37AM 0 points

Paperclippers aren't dangerous unless they are fairly stable paperclippers...and something as arbitrary as paperclipping is a very poor candidate for an attractor. The good candidates are the goals Omohundro thinks AIs will converge on.

Comment author: Luke_A_Somers 12 November 2014 01:47:51PM 0 points

Why do you think so?

Comment author: TheAncientGeek 12 November 2014 08:43:48PM 0 points

Which bit? There are about three claims there.

Comment author: Luke_A_Somers 14 November 2014 12:06:29PM 0 points

The second and third.

Comment author: TheAncientGeek 14 November 2014 08:53:28PM 0 points