
snarles comments on Steelmaning AI risk critiques - Less Wrong Discussion

26 Post author: Stuart_Armstrong 23 July 2015 10:01AM



Comment author: snarles, 25 July 2015 04:58:57AM, 2 points

There exists a technological plateau for general intelligence algorithms, and biological neural networks already come close to optimal. Hence, recursive self-improvement quickly hits an asymptote.

If so, artificial intelligence may still be a much cheaper way to produce and coordinate intelligence than raising humans, but it will not have orders of magnitude more capability for innovation than the human race. In particular, if humans are unable to discover breakthroughs enabling vastly more efficient production of computational substrate, then artificial intelligence will likewise be unable to. In that case, unfriendly AI poses an existential threat primarily through dangers we can already imagine, rather than through unanticipated technological breakthroughs.