
orthonormal comments on Hypothetical scenario - Less Wrong Discussion

-21 Post author: nick012000 16 February 2012 06:56AM



Comment author: orthonormal 17 February 2012 05:34:15AM 2 points

If you're wondering why everyone is downvoting this post, this is a good place to start. While there are some existential threats that humanity could fight against even after they're out of the bag (plagues, for instance), a post-intelligence-explosion AI is very probably not one of them.

(Of course, an AI might be able to pose a threat even without being capable of recursive self-improvement, and in that case the threat might conceivably be significant but not beyond human capacities. But your particular scenario is more a cheesy sci-fi pitch than a realistic hypothetical.)