CarlShulman comments on Should I believe what the SIAI claims? - Less Wrong

Post author: XiXiDu 12 August 2010 02:33PM


Comment author: CarlShulman 13 August 2010 07:11:14AM 2 points

Here are the Future of Humanity Institute's survey results from their Global Catastrophic Risks conference. The median estimate of total extinction risk by 2100 is 19%, with a 5% median for AI-driven extinction by 2100:

http://www.fhi.ox.ac.uk/selected_outputs/fohi_publications/global_catastrophic_risks_survey

Unfortunately, the survey didn't ask for the probability of AI development by 2100, so one can't derive the probability of catastrophe conditional on AI development from it.
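To make that limitation concrete, here is a minimal sketch. The 5% figure is the survey median quoted above; the `conditional_risk` helper and the values passed to it are illustrative assumptions, not survey data:

```python
# Assuming AI-driven extinction implies AI was developed by 2100:
# P(AI extinction | AI developed) = P(AI extinction) / P(AI developed).
p_ai_extinction = 0.05  # survey median for AI-driven extinction by 2100

def conditional_risk(p_ai_developed):
    """Extinction risk conditional on AI being developed by 2100."""
    return p_ai_extinction / p_ai_developed

# The survey gives no estimate of p_ai_developed, so the conditional
# is undetermined; it ranges from 5% (development certain) upward.
print(conditional_risk(1.0))  # 0.05
print(conditional_risk(0.5))  # 0.1
```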

Comment author: timtyler 13 August 2010 08:02:32AM 7 points

That sample is drawn from people who consider such risks important enough to attend a conference on the subject.

That seems like a self-selected sample of those with high estimates of p(DOOM).

The fact that this is probably a biased sample from the far end of a long tail should inform interpretation of the results.

Comment author: CarlShulman 13 August 2010 06:13:54PM 6 points

There is also the unpacking bias mentioned in the survey PDF. Pushing in the other direction are some knowledge effects. Also note that most of the attendees were not AI types, but experts on asteroids, nukes, bioweapons, cost-benefit analysis, astrophysics, and other non-AI risks. It's still interesting that the median AI risk was more than a quarter of the median total risk in light of that fact.
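The "more than a quarter" comparison follows directly from the two medians quoted upthread; a quick arithmetic sketch:

```python
# Survey medians quoted above: 19% total extinction risk by 2100,
# 5% AI-driven extinction risk by 2100.
median_total = 0.19
median_ai = 0.05

share = median_ai / median_total
print(f"AI share of median total risk: {share:.0%}")  # roughly 26%
assert share > 0.25  # i.e. more than a quarter
```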

Comment author: Rain 13 August 2010 12:42:48PM 4 points

There's also the possibility that people dismiss it out of hand, without even thinking, and that the more you look into the facts, the higher your estimate rises. In this instance, the people at the conference simply have the most facts.