Lumifer comments on Open Thread April 16 - April 22, 2014 - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
You're ignoring time. If you expect a sufficiently powerful FAI to arise, say, no earlier than a hundred years from now, and you think the coming century carries significant x-risks, focusing all your resources on FAI might not be a good idea.
Not to mention that if your P(AI) isn't close to one, you probably want to be prepared for the situation in which an AI never materializes.