ChristianKl comments on Steelmaning AI risk critiques - Less Wrong Discussion

26 Post author: Stuart_Armstrong 23 July 2015 10:01AM

Comment author: ChristianKl 23 July 2015 12:30:33PM 2 points [-]

A 200 IQ Stuxnet is a self-improving AGI. Anything that has a real IQ is an AGI, and if it's smarter than human researchers on the subject, it can self-improve.

Comment author: turchin 23 July 2015 12:56:31PM 0 points [-]

It may not use its technical ability to self-improve in order to kill all humans. It may also limit itself to low-level self-improvement, i.e. learning. Self-improvement is not a necessary condition for UFAI, but it may be one of its instruments.