
amcknight comments on Why an Intelligence Explosion might be a Low-Priority Global Risk

Post author: XiXiDu, 14 November 2011 11:40AM




Comment author: amcknight, 15 November 2011 07:54:06PM, 2 points

Just a reminder that risk from AI can arise without recursive self-improvement. Any AGI with an accurate model of our world and some goals could potentially be extremely destructive. Even if intelligence has diminishing returns, there is a huge installed hardware base to exploit, and a huge number of processors running millions of times faster than neurons to harness. Maybe intelligence won't explode in terms of self-improvement, but it can nevertheless explode in terms of pervasiveness and power.
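
For a rough sense of where the "millions of times faster" figure comes from, here is a back-of-the-envelope sketch. The neuron firing rate and CPU clock speed below are assumed ballpark values chosen for illustration, not measurements from the original comment:

```python
# Back-of-the-envelope comparison of serial step rates: silicon vs. neurons.
# Both constants are rough assumed figures, not measured values.

NEURON_FIRING_RATE_HZ = 200   # assumed upper-end biological neuron firing rate
CPU_CLOCK_RATE_HZ = 3e9       # assumed commodity CPU clock, ~3 GHz

ratio = CPU_CLOCK_RATE_HZ / NEURON_FIRING_RATE_HZ
print(f"Silicon runs roughly {ratio:.1e}x more serial steps per second")
# -> roughly 1.5e+07, i.e. tens of millions
```

The caveat is that this compares serial step rates only; brains are massively parallel, so the comparison says nothing about overall computational capacity, just about the raw speed advantage available to software running on existing hardware.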