
Normal_Anomaly comments on Q&A with Richard Carrier on risks from AI - Less Wrong Discussion

16 Post author: XiXiDu 13 December 2011 10:00AM


Comments (22)


Comment author: Normal_Anomaly 15 December 2011 12:21:56PM 3 points [-]

Some of those probabilities are wildly overconfident. Less than 1 in 10^20 for badly done superintelligence and badly done somewhat-less-superintelligence wiping out humanity? Ordinary risks are "billions upon billions" of times more likely than existential risks? Maybe that one could work if every tornado that killed ten people were counted under "ordinary risks," but it's still overconfident. If he thinks things on the scale of a small nuclear war or bioterrorism are billions of times more likely than existential risks, he's way overconfident.

Comment author: timtyler 16 December 2011 03:01:37PM *  0 points [-]

Some of those probabilities are wildly overconfident. Less than 1 in 10^20 for badly done superintelligence and badly done somewhat-less-superintelligence wiping out humanity?

That was:

P(involuntary human extinction without replacement | badly done AGI type (a)) = < 10^-20

"AGI type (a)" was previously defined to be:

(a) is probably inevitable, or at any rate a high probability, and there will likely be deaths or other catastrophes, but like other tech failures (e.g. the Titanic, three mile island, hijacking jumbo jets and using them as guided missiles) we will prevail, and very quickly [...]

So, what we may be seeing here is fancy footwork based on definitions.

If "a" = "humans win" then (humans lose | a) may indeed be very small.