
timtyler comments on Q&A with Richard Carrier on risks from AI

Post author: XiXiDu 13 December 2011 10:00AM




Comment author: timtyler 16 December 2011 03:01:37PM

Some of those probabilities are wildly overconfident. A probability of less than 10^-20 that badly done superintelligence, or badly done somewhat-less-superintelligence, wipes out humanity?

That was:

P(involuntary human extinction without replacement | badly done AGI type (a)) < 10^-20

"AGI type (a)" was previously defined to be:

(a) is probably inevitable, or at any rate a high probability, and there will likely be deaths or other catastrophes, but like other tech failures (e.g. the Titanic, three mile island, hijacking jumbo jets and using them as guided missiles) we will prevail, and very quickly [...]

So, what we may be seeing here is fancy footwork based on definitions.

If "a" = "humans win" then (humans lose | a) may indeed be very small.