
Lumifer comments on Open Thread April 16 - April 22, 2014 - Less Wrong Discussion

4 Post author: Tenoke 16 April 2014 07:05AM



Comment author: Lumifer 16 April 2014 07:00:23PM 11 points

So AI has a good claim to being the most effective at reducing x-risks, even the ones that aren't AI risk.

You're ignoring time. If you expect a sufficiently powerful FAI to arise, say, no earlier than a hundred years from now, and you think the coming century carries significant x-risks of its own, then focusing all resources on FAI might not be a good idea.

Not to mention that if your P(AI) isn't close to one, you probably want to be prepared for the scenario in which an AI never materializes.