
Oscar_Cunningham comments on Open Thread April 16 - April 22, 2014 - Less Wrong Discussion

Post author: Tenoke 16 April 2014 07:05AM 4 points



You are viewing a single comment's thread.

Comment author: Oscar_Cunningham 16 April 2014 06:42:05PM 0 points

Note that Friendly AI (if it works) will defeat all (or at least a lot of) x-risks. So AI has a good claim to being the most effective at reducing x-risks, even the ones that aren't AI risk. If you anticipate an intelligence explosion but aren't worried about UFAI, then your favourite charity is probably some non-MIRI AI research lab (Google?).

Comment author: Lumifer 16 April 2014 07:00:23PM 11 points

So AI has a good claim to being the most effective at reducing x-risks, even the ones that aren't AI risk.

You're ignoring time. If you expect a sufficiently powerful FAI to arise, say, no earlier than a hundred years from now, and you think the coming century has significant x-risks, then focusing all resources on FAI might not be a good idea.

Not to mention that if your P(AI) isn't close to one, you probably want to be prepared for the situation in which an AI never materializes.
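
(A minimal sketch of this expected-value argument, with purely illustrative numbers: survival requires both getting through the coming century's other x-risks and, if AI does arrive, having done enough FAI work. The model, its coefficients, and the P(AI) = 0.8 figure are all assumptions for illustration, not anything claimed in the thread.)

    # Toy model; hypothetical coefficients throughout. Only the shape of
    # the argument matters, not the specific numbers.
    def p_survival(p_ai, share_to_fai):
        """Survival probability as a function of how resources are split.

        p_ai         -- probability a powerful AI is ever built
        share_to_fai -- fraction of resources spent on FAI research (the
                        rest mitigates nearer-term x-risks like pandemics)
        """
        # If AI arrives, surviving it depends on how much went into FAI.
        p_survive_ai = 0.2 + 0.7 * share_to_fai
        # Either way, we must first survive a century of other x-risks,
        # which the remaining resources help mitigate.
        p_survive_other = 0.3 + 0.6 * (1 - share_to_fai)
        return p_survive_other * (p_ai * p_survive_ai + (1 - p_ai) * 1.0)

    for share in (0.0, 0.5, 1.0):
        print(f"share_to_fai={share:.1f}: P(survival) = {p_survival(0.8, share):.3f}")

With these made-up numbers the mixed allocation beats both extremes, which is Lumifer's point: even at a fairly high P(AI), an all-in bet on FAI forfeits protection against the x-risks of the intervening decades.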

Comment author: ChristianKl 16 April 2014 08:27:05PM 4 points

As far as I remember from the LW census data, the median predicted date for an AGI intelligence explosion didn't fall within this century, and more people considered bioengineered pandemics, rather than UFAI, the most probable x-risk of this century.

Comment author: satt 17 April 2014 02:53:42AM 1 point

Close. Bioengineered pandemics were the GCR (global catastrophic risk, not necessarily as bad as a full-blown x-risk) most often considered most likely, at 23% of responses. (Unfriendly AI came in third at 14%.) The median singularity year estimate on the survey was 2089 after outliers were removed.