Grognor comments on Why an Intelligence Explosion might be a Low-Priority Global Risk - Less Wrong Discussion

3 Post author: XiXiDu 14 November 2011 11:40AM

Comment author: Grognor 14 November 2011 12:48:46PM, 4 points

Before I dive into this material in depth, a few thoughts:

First, I want to sincerely congratulate you on being (it seems to me) the first in our tribe to dissent.

Second, it seems your problem isn't with an intelligence explosion as a risk on its own, but rather as one risk among others: one that is further from being solved, both in terms of work already done and in how tractable it is. If so, this post could use a better title, e.g., "Why an Intelligence Explosion is a Low-Priority Global Risk", which does not a priori exclude SIAI from potential donation targets. If I'm wrong about this, and you would consider eliminating the global risk from an intelligence explosion a low priority even apart from other global risks, I'll have to ask for an explanation.

Edit: It seems my comment has been noted and the title of the post changed.